Intelligence
Eugenio Piasini
Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. This year, one of the PhD scholarships is set aside for joint PhD projects across PhD programs within the Neuroscience department (https://www.sissa.it/research/neuroscience). The selection procedure is now open. The application deadline is 28 March 2024. To learn how to apply, please visit https://phdcns.sissa.it/admission-procedure . Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
Tom Griffiths
The Department of Computer Science invites applications for a postdoctoral or more senior research position in Computational Cognitive Science, under the direction of Tom Griffiths. The position requires a Ph.D. and is focused on using mathematical, computational, and behavioral methods to understand the nature of intelligence. Specific research areas of interest include applications of large language models in cognitive science and use of Bayesian methods and metalearning to understand human cognition and AI systems.
Mathew Diamond
Up to 2 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 27 August 2024. Please apply here (https://www.sissa.it/bandi/ammissione-ai-corsi-di-philosophiae-doctor-posizioni-cofinanziate-dal-fondo-sociale-europeo), and see the admission procedure page (https://phdcns.sissa.it/admission-procedure) for more information. Note that the positions available for current admission round are those funded by the 'Fondo Sociale Europeo Plus', accessible through the first link above.
Aapo
The University of Helsinki is hiring Research Fellows (advanced postdocs or beginning PIs) on the topic of Interdisciplinary Approaches to Intelligence. This is a collaborative framework combining computer science, neuroscience, and the humanities.
Eugenio Piasini
Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2025. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, neuronal foundations of perceptual experience and learning in various sensory modalities, motivation and intelligence, language, and neural computation. Our research is highly interdisciplinary; our approaches include behavioral, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline for the spring admission round is 20 March 2025 at 1pm CET. Please apply here, and see the admission procedure page for more information. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
A personal journey on understanding intelligence
The focus of this talk is not my research in AI or Robotics but my own journey in trying to do research and understand intelligence in a rapidly evolving research landscape. I will trace my path from conducting early-stage research in graduate school, to working on practical solutions in a startup environment, and finally to my current role, where I participate in more structured research at a major tech company. Drawing on these varied experiences, I will offer different perspectives on research and talk about how my core beliefs about intelligence have changed and sometimes even been compromised. There are no lessons to be learned from my stories, but I hope they will be entertaining.
Short and Synthetically Distort: Investor Reactions to Deepfake Financial News
Recent advances in artificial intelligence have led to new forms of misinformation, including highly realistic “deepfake” synthetic media. We conduct three experiments to investigate how and why retail investors react to deepfake financial news. Results from the first two experiments provide evidence that investors use a “realism heuristic,” responding more intensely to audio and video deepfakes as their perceptual realism increases. In the third experiment, we introduce an intervention to prompt analytical thinking, varying whether participants make analytical judgments about credibility or intuitive investment judgments. When making intuitive investment judgments, investors are strongly influenced by both more and less realistic deepfakes. When making analytical credibility judgments, investors are able to discern the non-credibility of less realistic deepfakes but struggle with more realistic deepfakes. Thus, while analytical thinking can reduce the impact of less realistic deepfakes, highly realistic deepfakes are able to overcome this analytical scrutiny. Our results suggest that deepfake financial news poses novel threats to investors.
Memory Decoding Journal Club: Reconstructing a new hippocampal engram for systems reconsolidation and remote memory updating
Join us for the Memory Decoding Journal Club, a collaboration between the Carboncopies Foundation and BPF Aspirational Neuroscience. This month, we're diving into a groundbreaking paper: 'Reconstructing a new hippocampal engram for systems reconsolidation and remote memory updating' by Bo Lei, Bilin Kang, Yuejun Hao, Haoyu Yang, Zihan Zhong, Zihan Zhai, and Yi Zhong from Tsinghua University, Beijing Academy of Artificial Intelligence, IDG/McGovern Institute of Brain Research, and Peking Union Medical College. Dr. Randal Koene will guide us through an engaging discussion on these exciting findings and their implications for neuroscience and memory research.
Active Predictive Coding and the Primacy of Actions in Natural and Artificial Intelligence
Brain Emulation Challenge Workshop
The Brain Emulation Challenge workshop will tackle cutting-edge topics such as ground-truthing for validation, leveraging artificial datasets generated from virtual brain tissue, and the transformative potential of virtual brain platforms, including their application to the forthcoming Brain Emulation Challenge.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
Llama 3.1 Paper: The Llama Family of Models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
Trends in NeuroAI - Brain-like topography in transformers (Topoformer)
Dr. Nicholas Blauch will present on his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals.
The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
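The orthogonalization described above can be quantified as the cosine similarity between population activity vectors recorded at matched positions on the two tracks: similar vectors early in learning, near-orthogonal vectors afterward. A toy sketch with invented numbers (not the study's data):

```python
import math

def cosine(u, v):
    # cosine similarity between two population activity vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Invented population vectors for the same position on the two tracks.
early_a = [1.0, 0.9, 0.8, 0.1]   # early in learning: tracks look alike
early_b = [0.9, 1.0, 0.7, 0.2]
late_a  = [1.0, 0.0, 0.9, 0.0]   # after learning: orthogonalized
late_b  = [0.0, 1.0, 0.0, 0.9]

print(cosine(early_a, early_b))  # high -> correlated representations
print(cosine(late_a, late_b))    # near zero -> orthogonalized
```

A state-machine-like map corresponds to the second regime, where distinct task states recruit non-overlapping activity patterns.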
Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)
Lead author Mehdi Azabou will present on his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Trends in NeuroAI - Brain-optimized inference (fMRI reconstructions)
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. 
Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
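The sample-select-shrink loop the abstract describes can be sketched as follows. Here `encode`, `sample_library`, and the squared-error distance are hypothetical stand-ins for the paper's trained encoding model and diffusion-based image distribution; only the loop structure reflects the method.

```python
import random

def encode(image):
    # Hypothetical encoding model mapping an "image" (a list of numbers)
    # to predicted brain activity; stand-in for the paper's trained encoder.
    return [x * 0.5 for x in image]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def sample_library(seed, spread, n=8):
    # Hypothetical sampler conditioned on the seed reconstruction; `spread`
    # plays the role of the image distribution's shrinking stochasticity.
    return [[x + random.gauss(0, spread) for x in seed] for _ in range(n)]

def refine(seed, measured, n_iters=5, spread=1.0):
    best = seed
    for _ in range(n_iters):
        candidates = sample_library(best, spread) + [best]
        # keep the candidate whose predicted activity best matches the data
        best = min(candidates, key=lambda img: distance(encode(img), measured))
        spread *= 0.5  # reduce stochasticity each iteration
    return best

random.seed(0)
measured = [0.5, -0.25, 1.0]   # toy "measured brain activity"
seed = [0.0, 0.0, 0.0]         # seed reconstruction from the base decoder
result = refine(seed, measured)
print(distance(encode(result), measured) <= distance(encode(seed), measured))  # True
```

The paper's stopping rule on the "width" of the image distribution is approximated here by a fixed iteration count with geometrically decaying spread.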
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation, we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
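The image-retrieval evaluation reported here reduces to nearest-neighbor search in embedding space: the MEG-predicted embedding is matched against a library of candidate image embeddings. A minimal sketch with invented embeddings (not the paper's DINOv2 pipeline):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(pred, library):
    # index of the library image whose embedding best matches the prediction
    return max(range(len(library)), key=lambda i: cosine(pred, library[i]))

library = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pred = [0.9, 0.2, 0.1]  # decoder output for a trial whose true image is index 0
print(retrieve(pred, library))  # → 0
```

Retrieval accuracy is then the fraction of trials where the true image ranks first (or within the top k) under this similarity.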
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
Use of Artificial Intelligence by Law Enforcement Authorities in the EU
Recently, artificial intelligence (AI) has become a global priority. Rapid and ongoing technological advancements in AI have prompted European legislative initiatives to regulate its use. In April 2021, the European Commission submitted a proposal for a Regulation that would harmonize artificial intelligence rules across the EU, including the law enforcement sector. Consequently, law enforcement officials await the outcome of the ongoing inter-institutional negotiations (trilogue) with great anticipation, as it will define how to capitalize on the opportunities presented by AI and how to prevent criminals from abusing this emergent technology.
How fly neurons compute the direction of visual motion
Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
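The "comparison of signals from neighboring photoreceptors over time" alluded to here is the classic Hassenstein-Reichardt correlator: delay one receptor's signal and multiply it with its neighbor's current signal, then subtract the mirror-image term so the output is signed by direction. A toy sketch, not the authors' biophysical model of T4/T5 cells:

```python
def correlator(left, right, delay=1):
    # Delay-and-multiply in both mirror arms; the opponent subtraction
    # makes the summed output positive for left-to-right motion.
    out = 0.0
    for t in range(delay, len(left)):
        out += left[t - delay] * right[t] - right[t - delay] * left[t]
    return out

# A bright edge moving left -> right: the right receptor sees it one step later.
left  = [0, 1, 0, 0, 0]
right = [0, 0, 1, 0, 0]
print(correlator(left, right))   # positive -> rightward motion
print(correlator(right, left))   # negative -> leftward motion
```

In the fly, an array of such local detectors, split into ON and OFF channels, would correspond to the four directionally tuned T4 and T5 populations.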
BrainLM Journal Club
Connor Lane will lead a journal club on the recent BrainLM preprint, a foundation model for fMRI trained using self-supervised masked autoencoder training. Preprint: https://www.biorxiv.org/content/10.1101/2023.09.12.557460v1 Tweeprint: https://twitter.com/david_van_dijk/status/1702336882301112631?t=Q2-U92-BpJUBh9C35iUbUA&s=19
Foundation models in ophthalmology
Abstract to follow.
Cognitive Computational Neuroscience 2023
CCN is an annual conference that serves as a forum for cognitive science, neuroscience, and artificial intelligence researchers dedicated to understanding the computations that underlie complex behavior.
Algonauts 2023 winning paper journal club (fMRI encoding models)
Algonauts 2023 was a challenge to create the best model that predicts fMRI brain activity given a seen image. Huze team dominated the competition and released a preprint detailing their process. This journal club meeting will involve open discussion of the paper with Q/A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper also from Huze that we can discuss: https://arxiv.org/pdf/2307.14021.pdf
1.8 billion regressions to predict fMRI (journal club)
Public journal club where this week Mihir will present on the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), where the authors use hundreds of pretrained model embeddings to best predict fMRI activity.
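The encoding-model recipe behind this kind of work is linear (ridge) regression from pretrained-model embeddings to voxel responses. A minimal pure-Python sketch for a 2-feature embedding and one voxel, solving the normal equations (X^T X + λI)^-1 X^T y by hand; all data are invented:

```python
def ridge_2d(X, y, lam=0.1):
    # accumulate X^T X (+ ridge penalty on the diagonal) and X^T y
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    g0 = sum(x[0] * yi for x, yi in zip(X, y))
    g1 = sum(x[1] * yi for x, yi in zip(X, y))
    # invert the 2x2 system explicitly
    det = a * d - b * b
    w0 = (d * g0 - b * g1) / det
    w1 = (a * g1 - b * g0) / det
    return w0, w1

# Invented data: the voxel responds as y ≈ 2*feat0 - 1*feat1
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
y = [2.0, -1.0, 1.0, 3.0]
w0, w1 = ridge_2d(X, y)
print(round(w0, 1), round(w1, 1))  # → 1.9 -0.9 (shrunk slightly by the penalty)
```

At the scale of the paper, the same fit is repeated across hundreds of embedding spaces and tens of thousands of voxels, which is where the "1.8 billion regressions" come from.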
In search of the unknown: Artificial intelligence and foraging
Diverse applications of artificial intelligence and mathematical approaches in ophthalmology
Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.
Consciousness in the age of mechanical minds
We are now clearly entering a new age in our relationship with machines. The power of AI natural language processors and image generators has rapidly exceeded the expectations of even those who developed them. Serious questions are now being asked about the extent to which machines could become — or perhaps already are — sentient or conscious. Do AI machines understand the instructions they are given and the answers they provide? In this talk I will consider the prospects for conscious machines, by which I mean machines that have feelings, know about their own existence, and about ours. I will suggest that the recent focus on information processing in models of consciousness, in which the brain is treated as a kind of digital computer, has misled us about the nature of consciousness and how it is produced in biological systems. Treating the brain as an energy processing system is more likely to yield answers to these fundamental questions and help us understand how and when machines might become minds.
Brain and Behavior: Employing Frequency Tagging as a Tool for Measuring Cognitive Abilities
Frequency tagging based on fast periodic visual stimulation (FPVS) provides a window into ongoing visual and cognitive processing and can be leveraged to measure rule learning and high-level categorization. In this talk, I will present data demonstrating highly proficient categorization of objects as living versus non-living in preschool children, and characterize the development of this ability during infancy. In addition to associating cognitive functions with development, an intriguing question is whether frequency tagging also captures enduring individual differences, e.g., in general cognitive abilities. Initial studies indicate high psychometric quality of FPVS categorization responses (Xu et al., Dzhelyova), providing a basis for research on individual differences. I will present results from a pilot study demonstrating high correlations between FPVS categorization responses and behavioral measures of processing speed and fluid intelligence. Drawing upon this first evidence, I will discuss the potential of frequency tagging for diagnosing cognitive functions across development.
How AI is advancing Clinical Neuropsychology and Cognitive Neuroscience
This talk aims to highlight the immense potential of Artificial Intelligence (AI) in advancing the fields of psychology and cognitive neuroscience. Through the integration of machine learning algorithms, big data analytics, and neuroimaging techniques, AI has the potential to revolutionize the way we study human cognition and brain characteristics. In this talk, I will highlight our latest scientific advancements in utilizing AI to gain deeper insights into variations in cognitive performance across the lifespan and along the continuum from healthy to pathological functioning. The presentation will showcase cutting-edge examples of AI-driven applications, such as deep learning for automated scoring of neuropsychological tests, natural language processing to characterize the semantic coherence of patients with psychosis, and other applications to diagnose and treat psychiatric and neurological disorders. Furthermore, the talk will address the challenges and ethical considerations associated with using AI in psychological research, such as data privacy, bias, and interpretability. Finally, the talk will discuss future directions and opportunities for further advancements in this dynamic field.
Estimating repetitive spatiotemporal patterns from resting-state brain activity data
Repetitive spatiotemporal patterns in resting-state brain activity have been widely observed across species and regions, such as rat and cat visual cortices. Since they resemble the preceding brain activities during tasks, they are assumed to reflect past experiences embedded in neuronal circuits. Moreover, spatiotemporal patterns involving whole-brain activities may also reflect a process that integrates information distributed over the entire brain, such as motor and visual information. Therefore, revealing such patterns may elucidate how this information is integrated to generate consciousness. In this talk, I will introduce our proposed method to estimate repetitive spatiotemporal patterns from resting-state brain activity data and show the spatiotemporal patterns estimated from human resting-state magnetoencephalography (MEG) and electroencephalography (EEG) data. Our analyses suggest that the patterns involved whole-brain propagating activities reflecting a process that integrates information distributed over frequencies and networks. I will also introduce our current attempt to reveal signal flows and their roles in the spatiotemporal patterns using a big dataset. - Takeda et al., Estimating repetitive spatiotemporal patterns from resting-state brain activity data. NeuroImage (2016); 133:251-65. - Takeda et al., Whole-brain propagating patterns in human resting-state brain activities. NeuroImage (2021); 245:118711.
Cognition in the Wild
What do nonhuman primates know about each other and their social environment, how do they allocate their attention, and what are the functional consequences of social decisions in natural settings? Addressing these questions is crucial to home in on the co-evolution of cognition, social behaviour and communication, and ultimately the evolution of intelligence in the primate order. I will present results from field experimental and observational studies on free-ranging baboons, which tap into the cognitive abilities of these animals. Baboons are particularly valuable in this context as different species reveal substantial variation in social organization and degree of despotism. Field experiments revealed considerable variation in the allocation of social attention: while the competitive chacma baboons were highly sensitive to deviations from the social order, the highly tolerant Guinea baboons revealed a confirmation bias. This bias may be a result of the high gregariousness of the species, which puts a premium on ignoring social noise. Variation in despotism clearly impacted the use of signals to regulate social interactions. For instance, male-male interactions in chacma baboons mostly comprised dominance displays, while Guinea baboon males evolved elaborate greeting rituals that serve to confirm group membership and test social bonds. Strikingly, the structure of signal repertoires does not differ substantially between different baboon species. In conclusion, the motivational disposition to engage in affiliation or aggressiveness appears to be more malleable during evolution than structural elements of the behavioral repertoire; this insight is crucial for understanding the dynamics of social evolution.
Deep learning applications in ophthalmology
Deep learning techniques have revolutionized the field of image analysis and played a disruptive role in the ability to quickly and efficiently train image analysis models that perform as well as human beings. This talk will cover the beginnings of the application of deep learning in the field of ophthalmology and vision science, and a variety of applications of deep learning as a method for scientific discovery and for uncovering latent associations.
AI for Multi-centre Epilepsy Lesion Detection on MRI
Epilepsy surgery is a safe but underutilised treatment for drug-resistant focal epilepsy. One challenge in the presurgical evaluation of patients with drug-resistant epilepsy is patients considered “MRI negative”, i.e. where a structural brain abnormality has not been identified on MRI. A major pathology in “MRI negative” patients is focal cortical dysplasia (FCD), where lesions are often small or subtle and easily missed by visual inspection. In recent years, there has been an explosion in artificial intelligence (AI) research in the field of healthcare. Automated FCD detection is an area where the application of AI may translate into significant improvements in the presurgical evaluation of patients with focal epilepsy. I will provide an overview of our automated FCD detection work, the Multicentre Epilepsy Lesion Detection (MELD) project, and how AI algorithms are beginning to be integrated into epilepsy presurgical planning at Great Ormond Street Hospital and elsewhere around the world. Finally, I will discuss the challenges and future work required to bring AI to the forefront of care for patients with epilepsy.
Children-Agent Interaction For Assessment and Rehabilitation: From Linguistic Skills To Mental Well-being
Socially Assistive Robots (SARs) have shown great potential to help children in therapeutic and healthcare contexts. SARs have been used for companionship, learning enhancement, social and communication skills rehabilitation for children with special needs (e.g., autism), and mood improvement. Robots can be used as novel tools to assess and rehabilitate children’s communication skills and mental well-being by providing affordable and accessible therapeutic and mental health services. In this talk, I will present the various studies I have conducted during my PhD and at the Cambridge Affective Intelligence and Robotics Lab to explore how robots can help assess and rehabilitate children’s communication skills and mental well-being. More specifically, I will provide both quantitative and qualitative results and findings from (i) an exploratory study with children with autism and global developmental disorders to investigate the use of intelligent personal assistants in therapy; (ii) an empirical study involving children with and without language disorders interacting with a physical robot, a virtual agent, and a human counterpart to assess their linguistic skills; (iii) an 8-week longitudinal study involving children with autism and language disorders who interacted either with a physical or a virtual robot to rehabilitate their linguistic skills; and (iv) an empirical study to aid the assessment of mental well-being in children. These findings can inform and help the child-robot interaction community design and develop new adaptive robots to help assess and rehabilitate linguistic skills and mental well-being in children.
Does subjective time interact with the heart rate?
Decades of research have investigated the relationship between time perception and heart rate, with often mixed results. In search of such a relationship, I will present my journey across two projects: from time perception in a realistic VR experience of crowded subway trips lasting on the order of minutes (project 1), to the perceived duration of sub-second white-noise tones (project 2). Heart rate had multiple concurrent relationships with subjective temporal distortions for the sub-second tones, while the effects were weak or absent for the supra-minute subway trips. What does the heart have to do with sub-second time perception? We addressed this question with a cardiac drift-diffusion model, demonstrating the sensory accumulation of temporal evidence as a function of heart rate.
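The accumulation-to-bound mechanism underlying drift-diffusion models can be illustrated with a minimal simulation. This is a generic sketch of a standard diffusion-to-bound process, not the authors' cardiac model; the drift rates, bound, and noise level below are arbitrary assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def ddm_decision_time(drift, bound=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Simulate one diffusion-to-bound trial; return the bound-crossing time."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        # Euler-Maruyama step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# Higher drift (stronger temporal evidence) crosses the bound sooner on average.
fast = np.mean([ddm_decision_time(3.0) for _ in range(200)])
slow = np.mean([ddm_decision_time(0.5) for _ in range(200)])
```

In a cardiac variant of this idea, the drift rate would be allowed to vary with heart rate, so that faster accumulation translates into longer perceived durations.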
Affective Intelligence in Digital Psychiatry: Would Wundt Woo?
Two sides of emotion expressions: Readouts and Regulators
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments that expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Active vision in Drosophila
On the link between conscious function and general intelligence in humans and machines
In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function.
Do large language models solve verbal analogies like children do?
Analogical reasoning (learning about new things by relating them to previous knowledge) lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making incredible progress moving from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?” they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term for a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, providing associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?” they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between children's and six large language models' (Dutch/multilingual models: RobBERT, BERT-je, M-BERT, GPT-2, M-GPT, Word2Vec and Fasttext) responses to verbal analogies extracted from an online adaptive learning environment, where >14,000 7- to 12-year-olds from the Netherlands solved 20 or more items from a database of 900 Dutch-language verbal analogies.
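The contrast between associative and relational responses can be made concrete with the vector-offset method commonly used to probe embedding models such as Word2Vec: to complete A:B :: C:?, find the word closest to C + (B − A). The sketch below uses hand-made 3-dimensional toy vectors purely for illustration; a real study would use vectors and a vocabulary from a trained model:

```python
import numpy as np

# Toy embedding space (hypothetical vectors; real models use 100s of dimensions).
emb = {
    "horse":   np.array([1.0, 0.0, 0.2]),
    "stable":  np.array([1.0, 1.0, 0.2]),
    "chicken": np.array([0.0, 0.0, 0.9]),
    "coop":    np.array([0.0, 1.0, 0.9]),   # relational answer (animal -> home)
    "egg":     np.array([0.1, 0.1, 1.0]),   # associative distractor
}

def solve_analogy(a, b, c, vocab):
    """Complete a:b :: c:? by nearest cosine neighbour of c + (b - a)."""
    target = emb[c] + (emb[b] - emb[a])
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

answer = solve_analogy("horse", "stable", "chicken", emb)  # -> "coop"
```

Whether a model picks the relational neighbour ("coop") or the merely associated one ("egg") depends entirely on the geometry of its learned embedding space, which is exactly what the comparison with children probes.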
Merging insights from artificial and biological neural networks for neuromorphic intelligence
Lifelong Learning AI via neuro-inspired solutions
AI embedded in real systems, such as satellites, robots and other autonomous devices, must make fast, safe decisions even when the environment changes or available power is limited; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity: the AI must be trained in advance, typically on a large dataset and at considerable computational cost, and once fielded, the AI is frozen. It is unable to use its experience to operate when the environment falls outside its training, or to improve its expertise; worse, since datasets cannot cover all possible real-world situations, systems with such frozen intelligent control are likely to fail. Lifelong Learning is the cutting edge of artificial intelligence, encompassing computational methods that allow systems to learn at runtime and to apply that learning in new, unanticipated situations. Until recently, this sort of computation was found exclusively in nature; thus, Lifelong Learning looks to nature, and in particular neuroscience, for its underlying principles and mechanisms, and then translates them to this new technology. Our presentation will introduce a number of state-of-the-art approaches to adaptive AI learning, including from DARPA's L2M program and subsequent developments. Many environments are affected by temporal changes, such as the time of day, week, or season. One way to create adaptive systems that are both small and robust is to make them aware of time and able to comprehend temporal patterns in the environment. We will describe our current research in temporal AI, while also considering power constraints.
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can be subsequently retrieved from partial retrieval cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
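The store-as-fixed-points, retrieve-from-partial-cues scheme can be illustrated, in highly simplified form, with a classic Hopfield-style associative memory. Here random ±1 patterns stand in for the binarized VSA-encoded structures, and the Hebbian outer-product rule stands in for the associative plasticity rule; all sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                      # 200 neurons, 5 stored patterns (low load)

# Random +/-1 patterns as stand-ins for binarized VSA-encoded structures.
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian (outer-product) associative plasticity rule; no self-coupling.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iterate the network dynamics; at low load it settles on a stored fixed point."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Partial cue: corrupt 20% of the first pattern's entries, then retrieve.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1
overlap = recall(cue) @ patterns[0] / N   # 1.0 means perfect retrieval
```

The paper's contribution goes well beyond this sketch — structured (relational) patterns rather than random ones, and retrieval of individual building blocks — but the fixed-point storage principle is the same.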
What do neurons want?
From Machine Learning to Autonomous Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf
AI-assisted language learning: Assessing learners who memorize and reason by analogy
Vocabulary learning applications like Duolingo have millions of users around the world, yet are based on very simple heuristics for choosing the teaching material presented to their users. In this presentation, we will discuss the possibility of developing more advanced artificial teachers based on modeling the learner's inner characteristics. In the case of teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as part of the learner's model.
Building System Models of Brain-Like Visual Intelligence with Brain-Score
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies have necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, better predict primate temporal processing, become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.
Learning static and dynamic mappings with local self-supervised plasticity
Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS).
Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena like blocking, overshadowing, saliency effects, extinction, interstimulus interval effects etc. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens
We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119
Semantic Distance and Beyond: Interacting Predictors of Verbal Analogy Performance
Prior studies of A:B::C:D verbal analogies have identified several factors that affect performance, including the semantic similarity between source and target domains (semantic distance), the semantic association between the C-term and incorrect answers (distracter salience), and the type of relations between word pairs (e.g., categorical, compositional, and causal). However, it is unclear how these stimulus properties affect performance when utilized together. Moreover, how do these item factors interact with individual differences such as crystallized intelligence and creative thinking? Several studies reveal interactions among these item and individual difference factors impacting verbal analogy performance. For example, a three-way interaction demonstrated that the effects of semantic distance and distracter salience had a greater impact on performance for compositional and causal relations than for categorical ones (Jones, Kmiecik, Irwin, & Morrison, 2022). Implications for analogy theories and future directions are discussed.
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
Careers for neuroscience in Artificial Intelligence
The purpose of this event is twofold: to raise awareness of careers in AI to neuroscience postgraduate and Early Career Researchers (ECRs), and to give the chance for commercial organisations to acquire and diversify their talent pool. We know that our early career members are highly motivated and interested in different career pathways, and wish to help them fulfil their ambitions. This will be a hybrid event held in person at Arca Blanca, Covent Garden, London and also available online. FREE for BNA members!
Faking emotions and a therapeutic role for robots and chatbots: Ethics of using AI in psychotherapy
In recent years, there has been a proliferation of social robots and chatbots designed so that users form an emotional attachment to them. This talk will start by presenting the first such chatbot, a program called Eliza designed by Joseph Weizenbaum in the mid-1960s. Then we will look at some recent robots and chatbots with Eliza-like interfaces and examine their benefits as well as various ethical issues raised by deploying such systems.
On the Hunt: Ingenious Foraging Strategies in Bats & Spiders
Forensic use of face recognition systems for investigation
With the increasing development of automatic systems and artificial intelligence, face recognition is becoming increasingly important in forensic and civil contexts. However, face recognition has yet to be thoroughly studied empirically to provide an adequate scientific and legal framework for investigative and court purposes. This observation sets the foundation for the research. We focus on issues related to face images and the use of automatic systems. Our objective is to validate a likelihood ratio computation methodology for interpreting comparison scores from automatic face recognition systems (score-based likelihood ratio, SLR). We collected three types of traces: portraits (ID), and video surveillance footage recorded by ATMs and by a wide-angle camera (CCTV). The performance of two automatic face recognition systems is compared: the commercial IDEMIA Morphoface (MFE) system and the open-source FaceNet algorithm.
Understanding Natural Language: Insights From Cognitive Science, Cognitive Neuroscience, and Artificial Intelligence
Non-invasive brain-machine interface control with artificial intelligence copilots
COSYNE 2025
Availability of information on artificial intelligence-enhanced hearing aids: A social media analysis
FENS Forum 2024
Cognitive and intelligence measures for ADHD identification by machine learning models
FENS Forum 2024
Constructing an artificial intelligence algorithm based on awake mouse brain calcium imaging as a rapid screening platform for the development of Parkinson's disease drugs
FENS Forum 2024
Development of NTS2-selective non-opioid analgesics using artificial intelligence
FENS Forum 2024
Semi-blind machine learning for fMRI-based predictions of intelligence
FENS Forum 2024
Intelligence Offloading and the Neurosimulation of Developmental Agents
Neuromatch 5
A Reservoir Model of Explicit Human Intelligence
Neuromatch 5