
Feature Extraction


Discover seminars, jobs, and research tagged with feature extraction across World Wide.
7 curated items · 6 seminars · 1 ePoster
Updated about 2 years ago
Seminar · Neuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 20, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: SwiFT: Swin 4D fMRI Transformer

Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional magnetic resonance imaging (fMRI), is a formidable task in neuroscience. Existing approaches to fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT on multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential for facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.

Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha's lab at Seoul National University.

Paper link: https://arxiv.org/abs/2307.05916
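As a rough illustration of the windowed-attention idea the abstract describes, the sketch below partitions a 4D volume sequence into non-overlapping 4D windows and applies scaled dot-product self-attention within each window. This is a toy NumPy sketch (single head, random weights, no positional embeddings, illustrative shapes), not the SwiFT implementation:

```python
import numpy as np

def window_partition_4d(x, w):
    """Split a volume sequence (T, H, W, D, C) into non-overlapping
    4D windows of side w, yielding (num_windows, w**4, C) token groups."""
    T, H, W, D, C = x.shape
    x = x.reshape(T // w, w, H // w, w, W // w, w, D // w, w, C)
    # bring the four window axes together, then flatten them into tokens
    x = x.transpose(0, 2, 4, 6, 1, 3, 5, 7, 8)
    return x.reshape(-1, w ** 4, C)

def window_self_attention(tokens, rng):
    """Single-head scaled dot-product self-attention within each window."""
    n_win, n_tok, c = tokens.shape
    Wq, Wk, Wv = (rng.standard_normal((c, c)) / np.sqrt(c) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(c)
    scores -= scores.max(axis=-1, keepdims=True)  # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8, 8, 16))   # (time, H, W, D, channels)
windows = window_partition_4d(x, w=2)        # -> (128 windows, 16 tokens, 16)
out = window_self_attention(windows, rng)    # same shape as windows
```

Restricting attention to local 4D windows is what keeps the cost tractable: full self-attention over every voxel-time token would scale quadratically in T·H·W·D.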

Seminar · Neuroscience

Multimodal framework and fusion of EEG, graph theory and sentiment analysis for the prediction and interpretation of consumer decision

Veeky Baths
Cognitive Neuroscience Lab (Bits Pilani Goa Campus)
Feb 2, 2022

The application of neuroimaging methods to marketing has recently gained considerable attention. In analyzing consumer behavior, the inclusion of neuroimaging tools and methods is improving our understanding of consumers' preferences. Human emotions play a significant role in decision making and critical thinking, and emotion classification using EEG data and machine learning techniques has been on the rise in the recent past. We evaluate different feature extraction and feature selection techniques and propose an optimal set of features and electrodes for emotion recognition. Affective neuroscience research can help in detecting emotions when a consumer responds to an advertisement; successful emotional elicitation is a verification of the advertisement's effectiveness. EEG provides a cost-effective way to measure advertisement effectiveness while eliminating several drawbacks of existing market research tools that depend on self-reporting. We used graph-theoretical principles to differentiate brain connectivity graphs when a consumer likes a logo versus dislikes it. The fusion of EEG and sentiment analysis can be a real game changer, and this combination has the power and potential to provide innovative tools for market research.
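The abstract does not specify which EEG features were used, but a common baseline for this kind of emotion-classification pipeline is spectral band power per channel. A minimal sketch, assuming standard frequency bands and synthetic two-channel data (all names and parameters illustrative, not the authors' pipeline):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of a 1-D signal within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def eeg_features(eeg, fs, bands):
    """Build a (channels x bands) feature matrix from multichannel EEG."""
    return np.array([[band_power(ch, fs, b) for b in bands.values()]
                     for ch in eeg])

fs = 128                                   # sampling rate (Hz)
t = np.arange(fs * 4) / fs                 # 4 seconds of signal
rng = np.random.default_rng(1)
# two synthetic channels: one alpha-dominant (10 Hz), one beta-dominant (20 Hz)
eeg = np.stack([np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size),
                np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)])
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
feats = eeg_features(eeg, fs, bands)       # shape (2 channels, 3 bands)
```

Feature selection would then rank these channel-band features (e.g. by mutual information with the emotion labels) to find the optimal electrode subset the abstract mentions.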

Seminar · Neuroscience · Recording

Motion processing across visual field locations in zebrafish

Aristides Arrenberg
University of Tuebingen
Sep 27, 2020

Animals are able to perceive self-motion and navigate their environment using optic flow information. They often perform visually guided stabilization behaviors such as the optokinetic response (OKR) or optomotor response (OMR) in order to maintain their eye and body position relative to the moving surround. But how does the animal manage to produce the appropriate behavioral response, and how are processing tasks divided among the various non-cortical visual brain areas? Experiments have shown that the zebrafish pretectum, which is homologous to the mammalian accessory optic system, is involved in the OKR and OMR. The optic tectum (superior colliculus in mammals) is involved in the processing of small stimuli, e.g. during prey capture. We have previously shown that many pretectal neurons respond selectively to rotational or translational motion. These neurons are likely detectors of specific optic flow patterns and mediate the animal's behavioral choices based on optic flow information. We investigate the motion feature extraction of brain structures that receive input from retinal ganglion cells to identify the visual computations that underlie behavioral decisions during prey capture, OKR, OMR, and other visually mediated behaviors. Our study of receptive fields shows that receptive field sizes in the pretectum (large) and tectum (small) are very different, and that pretectal responses are diverse and anatomically organized. Since calcium indicators are slow and receptive fields for motion stimuli are difficult to measure, we also develop novel stimuli and statistical methods to infer the neuronal computations of visual brain areas.
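The abstract describes measuring receptive fields from neural responses. One textbook approach (a generic illustration, not necessarily this lab's method) is reverse correlation with a white-noise stimulus, where the spike-triggered average recovers a linear receptive field. A toy example with a hypothetical Gaussian receptive field:

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, h, w = 5000, 8, 8
stim = rng.standard_normal((n_frames, h, w))   # white-noise stimulus movie

# hypothetical ground-truth receptive field: a Gaussian blob centered at (3, 5)
yy, xx = np.mgrid[0:h, 0:w]
true_rf = np.exp(-((yy - 3) ** 2 + (xx - 5) ** 2) / 2.0)

# linear-nonlinear model neuron: rectified linear drive, Poisson spiking
drive = (stim * true_rf).sum(axis=(1, 2))
spikes = rng.poisson(np.maximum(drive, 0))

# spike-triggered average: mean stimulus frame weighted by spike count
sta = (spikes[:, None, None] * stim).sum(axis=0) / spikes.sum()
peak = np.unravel_index(np.argmax(sta), sta.shape)  # near (3, 5)
```

For slow indicators like calcium, the spike train would be replaced by a deconvolved activity trace, and the stimulus correlated against it in the same way.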

Seminar · Neuroscience · Recording

What the eye tells the brain: Visual feature extraction in the mouse retina

Katrin Franke
University of Tübingen
Jul 6, 2020

Visual processing begins in the retina: within only two synaptic layers, multiple parallel feature channels emerge, which relay highly processed visual information to different parts of the brain. To functionally characterize these feature channels, we perform calcium and glutamate population activity recordings at different levels of the mouse retina. This allows us to follow the complete visual signal across consecutive processing stages in a systematic way. In my talk, I will summarize our recent findings on the functional diversity of retinal output channels and how they arise within the retinal network. Specifically, I will talk about the role of inhibition and cell-type-specific dendritic processing in generating diverse visual channels. Then, I will focus on how color, a single visual feature, emerges across all retinal processing layers, and link our results to behavioral output and the statistics of mouse natural scenes. With our approach, we hope to identify general computational principles of retinal signaling, thereby increasing our understanding of what the eye tells the brain.

ePoster

Adversarial-inspired autoencoder framework for salient sensory feature extraction

Greta Horvathova, Dan Goodman

Bernstein Conference 2024