Graphs
We are hiring two postdocs to join the Pattern Analysis and Computer Vision (PAVIS) Research Line coordinated by Dr. Alessio Del Bue. The positions are part of a joint multidisciplinary effort between PAVIS, the Atomistic Simulation (ATSIM) Research Line led by Prof. Michele Parrinello, the Computational Facility led by Dr. Sergio Decherchi, and Dompé Pharmaceutical. The focus is on the study of novel ML and DL methods that can efficiently incorporate priors and constraints related to physical models, with a special focus on self-supervised and generative modelling. The goal is to develop models applicable to IIT interdisciplinary research, especially in drug discovery and molecular modelling for large-scale datasets, leveraging IIT’s HPC computational facilities.
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct brain responses compared to videos and photos, suggesting that they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain. This points to a neural sensitivity to naturalistic facial motion that operates beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
From controlled environments to complex realities: Exploring the interplay between perceived minds and attention
In our daily lives, we perceive things as possessing a mind (e.g., people) or lacking one (e.g., shoes). Intriguingly, how much mind we attribute to people can vary, with real people perceived to have more mind than depictions of individuals, such as photographs. Drawing from a range of research methodologies, including naturalistic observation, mobile eye tracking, and surreptitious behaviour monitoring, I discuss how various shades of mind influence human attention and behaviour. The findings suggest the novel concept that overt attention (where one looks) in real life is fundamentally supported by covert attention (attending to someone out of the corner of one's eye).
Multimodal framework and fusion of EEG, graph theory and sentiment analysis for the prediction and interpretation of consumer decision
The application of neuroimaging methods to marketing has recently gained considerable attention. In analyzing consumer behavior, the inclusion of neuroimaging tools and methods is improving our understanding of consumers' preferences. Human emotions play a significant role in decision making and critical thinking, and emotion classification using EEG data and machine learning techniques has been on the rise in recent years. We evaluate different feature extraction and feature selection techniques and propose an optimal set of features and electrodes for emotion recognition. Affective neuroscience research can help detect emotions when a consumer responds to an advertisement, and successful emotional elicitation verifies the advertisement's effectiveness. EEG provides a cost-effective alternative for measuring advertisement effectiveness while eliminating several drawbacks of existing market research tools that depend on self-reporting. We used graph-theoretical principles to differentiate brain connectivity graphs when a consumer likes a logo versus when they dislike it. The fusion of EEG and sentiment analysis can be a real game changer, and this combination has the potential to provide innovative tools for market research.
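The graph-theoretic comparison described above can be sketched in miniature: threshold a symmetric EEG channel-coherence matrix into an undirected graph and compare simple network metrics such as density. The coherence values and the 0.5 threshold below are illustrative placeholders, not the study's data or parameters.

```python
def threshold_graph(coherence, threshold=0.5):
    """Binarise a symmetric channel-coherence matrix into an adjacency dict."""
    n = len(coherence)
    return {i: {j for j in range(n) if j != i and coherence[i][j] > threshold}
            for i in range(n)}

def density(adj):
    """Fraction of possible channel pairs that are connected."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) / 2
    return 2 * edges / (n * (n - 1))

# Toy symmetric "coherence" matrix for four EEG channels (placeholder values)
liked_logo = [
    [1.0, 0.8, 0.2, 0.7],
    [0.8, 1.0, 0.6, 0.3],
    [0.2, 0.6, 1.0, 0.9],
    [0.7, 0.3, 0.9, 1.0],
]
print(density(threshold_graph(liked_logo)))  # 4 of the 6 possible edges survive
```

In the study's setting, such graphs would be built per participant and condition (liked versus disliked logo) and their metrics compared statistically.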
Spatial alignment supports visual comparisons
Visual comparisons are ubiquitous, and they can also be an important source for learning (e.g., Gentner et al., 2016; Kok et al., 2013). In science, technology, engineering, and math (STEM), key information is often conveyed through figures, graphs, and diagrams (Mayer, 1993). Comparing within and across visuals is critical for gleaning insight into the underlying concepts, structures, and processes that they represent. This talk addresses how people make visual comparisons and how visual comparisons can be best supported to improve learning. In particular, the talk will present a series of studies exploring the Spatial Alignment Principle (Matlen et al., 2020), derived from Structure-Mapping Theory (Gentner, 1983). Structure-mapping theory proposes that comparisons involve a process of finding correspondences between elements based on structured relationships. The Spatial Alignment Principle suggests that spatially arranging compared figures directly – to support correct correspondences and minimize interference from incorrect correspondences – will facilitate visual comparisons. We find that direct placement can facilitate visual comparison in educationally relevant stimuli, and that it may be especially important when figures are less familiar. We also present complementary evidence illustrating the preponderance of visual comparisons in 7th grade science textbooks.
GuPPy, a Python toolbox for the analysis of fiber photometry data
Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, and the ability to combine FP with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with limited programming experience. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as a set of well-commented Jupyter notebooks designed to operate across platforms, and it presents the user with graphical user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
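As a hedged illustration of the kind of preprocessing such tools perform (not GuPPy's actual API; function names and values here are invented for the sketch), a common fiber-photometry step fits the isosbestic control channel to the signal channel and expresses the signal as a fractional change (ΔF/F):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ≈ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def delta_f_over_f(signal, control):
    """Fit the control channel to the signal, then return (F - fit) / fit."""
    slope, intercept = linear_fit(control, signal)
    fitted = [slope * c + intercept for c in control]
    return [(s, f) and (s - f) / f for s, f in zip(signal, fitted)]

signal  = [1.0, 1.1, 1.3, 1.2, 1.5]   # e.g. a 470 nm channel (toy values)
control = [1.0, 1.0, 1.1, 1.1, 1.2]   # e.g. a 405 nm isosbestic channel
print(delta_f_over_f(signal, control))
```

Real pipelines add artifact removal, filtering, and event-aligned averaging on top of this core step.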
Spike-based embeddings for multi-relational graph data
A rich data representation that finds wide application in industry and research is the so-called knowledge graph: a graph-based structure in which entities are depicted as nodes and relations between them as edges. Complex systems like molecules, social networks and industrial factory systems can be described in the common language of knowledge graphs, allowing graph embedding algorithms to make context-aware predictions in these information-packed environments.
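The embedding idea can be illustrated with a toy TransE-style score, in which entities and relations become vectors and a triple (head, relation, tail) is plausible when head + relation lands near tail. The vectors below are hand-picked toy values, and TransE stands in here for embedding methods generally; the talk's spike-based approach realises the same principle with spiking neural dynamics.

```python
def transe_score(h, r, t):
    """Negative Euclidean distance between h + r and t (higher = more plausible)."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Hand-picked 2-D toy embeddings (illustrative only)
entities  = {"aspirin": [0.1, 0.9], "headache": [0.8, 0.4]}
relations = {"treats":  [0.7, -0.5]}

true_triple  = transe_score(entities["aspirin"], relations["treats"], entities["headache"])
false_triple = transe_score(entities["headache"], relations["treats"], entities["aspirin"])
print(true_triple > false_triple)  # the plausible triple scores higher
```

Trained at scale, such scores support link prediction: ranking candidate tails for a given head and relation.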
Learning the structure and investigating the geometry of complex networks
Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality.

Machine learning and networks. Decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph data sets that computes several thousand graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark data sets whilst retaining the interpretability of network features, which we exemplify on a data set of neuronal morphology images.

Graph dimensionality. Dimension is a fundamental property of objects and of the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from fractal-based approaches, we present a new framework to define intrinsic notions of dimension on networks: the relative, local and global dimension. We showcase our method on various physical systems.
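The hcga recipe, turning each graph into a fixed-length vector of interpretable features that a standard classifier can then consume, can be shown in miniature. hcga itself computes thousands of features; the three below are minimal placeholders.

```python
def graph_features(adj):
    """Feature vector [n_nodes, n_edges, max_degree] from an adjacency dict."""
    n = len(adj)
    degrees = [len(nbrs) for nbrs in adj.values()]
    return [n, sum(degrees) // 2, max(degrees)]

# Two tiny undirected graphs as adjacency dicts
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path     = {0: {1}, 1: {0, 2}, 2: {1}}

print(graph_features(triangle))  # → [3, 3, 2]
print(graph_features(path))      # → [3, 2, 2]
```

Stacking such vectors over a labelled graph data set yields an ordinary feature matrix, so feature selection then points back to interpretable network properties.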
Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning
Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detecting these structures requires complex decision making that is often impossible with current image analysis software, and it is therefore typically executed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on deep convolutional neural networks (DNNs) is currently emerging as the most promising technique for solving such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms such as Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the emerging tools based on deep learning are still rarely used. There is also little experience in the community regarding how much training data must be collected to obtain a reasonable prediction result with deep learning based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command-line interface and is purely designed for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC to a variety of tasks. From our experience, a surprisingly low amount of sparse label data is needed to train a sufficiently accurate classifier for typical bioimaging applications.
Not least because of this, YAPiC has become the "standard weapon" for our core facility to detect objects in hard-to-segment images. We would like to present some use cases, such as cell classification in high content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, and actin filament detection in transmission electron microscopy.
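The pixel-classification workflow that YAPiC automates can be sketched in miniature: a handful of sparsely labelled pixels supply training examples, each pixel becomes a small feature vector, and the trained model assigns a class to every pixel. A nearest-class-mean rule stands in here for YAPiC's deep network, and all values are toy data.

```python
def pixel_features(img, y, x):
    """Intensity plus mean of the 4-neighbourhood (clipped at image borders)."""
    h, w = len(img), len(img[0])
    nbrs = [img[j][i] for j, i in [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if 0 <= j < h and 0 <= i < w]
    return [img[y][x], sum(nbrs) / len(nbrs)]

def train(img, sparse_labels):
    """sparse_labels maps a few (row, col) pixels to class ids; returns class means."""
    sums, counts = {}, {}
    for (y, x), c in sparse_labels.items():
        f = pixel_features(img, y, x)
        sums[c] = [a + b for a, b in zip(sums.get(c, [0.0, 0.0]), f)]
        counts[c] = counts.get(c, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict(img, means, y, x):
    """Assign the pixel to the class whose mean feature vector is closest."""
    f = pixel_features(img, y, x)
    return min(means, key=lambda c: sum((a - b) ** 2 for a, b in zip(f, means[c])))

img = [[0.1, 0.2, 0.9],
       [0.1, 0.8, 0.9],
       [0.2, 0.9, 1.0]]
means = train(img, {(0, 0): 0, (2, 2): 1})  # only two sparse labels
print(predict(img, means, 0, 1))  # → 0 (background-like pixel)
print(predict(img, means, 1, 2))  # → 1 (structure-like pixel)
```

A DNN replaces the hand-made features and the nearest-mean rule with learned ones, which is what makes the approach work on genuinely hard-to-segment images.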
Modularity of attractors in inhibition-dominated TLNs
Threshold-linear networks (TLNs) display a wide variety of nonlinear dynamics including multistability, limit cycles, quasiperiodic attractors, and chaos. Over the past few years, we have developed a detailed mathematical theory relating stable and unstable fixed points of TLNs to graph-theoretic properties of the underlying network. In particular, we have discovered that a special type of unstable fixed points, corresponding to "core motifs," are predictive of dynamic attractors. Recently, we have used these ideas to classify dynamic attractors in a two-parameter family of inhibition-dominated TLNs spanning all 9608 directed graphs of size n=5. Remarkably, we find a striking modularity in the dynamic attractors, with identical or near-identical attractors arising in networks that are otherwise dynamically inequivalent. This suggests that, just as one can store multiple static patterns as stable fixed points in a Hopfield model, a variety of dynamic attractors can also be embedded in a TLN in a modular fashion.
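For reference, TLN dynamics take the standard threshold-linear form below, where $W$ is the synaptic weight matrix, $b$ the external input, and $[\,\cdot\,]_+$ the threshold nonlinearity; in the inhibition-dominated combinatorial family studied here, $W$ is built directly from a directed graph, with weaker inhibition along existing edges than between unconnected nodes.

```latex
\frac{dx_i}{dt} = -x_i + \left[ \sum_{j=1}^{n} W_{ij}\, x_j + b_i \right]_+ ,
\qquad [y]_+ = \max(y, 0), \qquad i = 1, \dots, n .
```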
Accuracy versus consistency: Investigating face and voice matching abilities
Deciding whether two different face photographs or voice samples come from the same person represents a fundamental challenge in applied settings. To date, most research has focussed on average performance in these tests, failing to consider individual differences and within-person consistency in responses. In the current studies, participants completed the same face or voice matching test on two separate occasions, allowing comparison of overall accuracy across the two timepoints as well as consistency in trial-level responses. In both experiments, participants were highly consistent in their performance. In addition, we demonstrated a large association between consistency and accuracy, with the most accurate participants also tending to be the most consistent. This is an important result for applied settings in which organisational groups of super-matchers are deployed in real-world contexts. Being able to reliably identify these high performers based upon only a single test could inform recruitment for law enforcement agencies worldwide.