Latest

Seminar · Neuroscience

How do protein-RNA condensates form and contribute to disease?

Jernej Ule
UK Dementia Research Institute
May 6, 2022

In recent years, it has become clear that intrinsically disordered regions (IDRs) of RBPs, and the structure of RNAs, often contribute to the condensation of RNPs. To understand the transcriptomic features of such RNP condensates, we’ve used an improved individual nucleotide resolution CLIP protocol (iiCLIP), which produces highly sensitive and specific data, and thus enables quantitative comparisons of interactions across conditions (Lee et al., 2021). This showed how the IDR-dependent condensation properties of TDP-43 specify its RNA binding and regulatory repertoire (Hallegger et al., 2021). Moreover, we developed software for discovery and visualisation of RNA binding motifs that uncovered common binding patterns of RBPs on long multivalent RNA regions that are composed of dispersed motif clusters (Kuret et al., 2021). Finally, we used hybrid iCLIP (hiCLIP) to characterise the RNA structures mediating the assembly of Staufen RNPs across mammalian brain development, which demonstrated the roles of long-range RNA duplexes in the compaction of long 3’UTRs. I will present how the combined analysis of the characteristics of IDRs in RBPs, multivalent RNA regions and RNA structures is required to understand the formation and functions of RNP condensates, and how they change in diseases.
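
As a rough illustration of what "long multivalent RNA regions composed of dispersed motif clusters" can mean in practice (this is not the motif-analysis software from Kuret et al., 2021), the sketch below slides a window along a transcript and flags stretches where a short binding motif recurs at high density. The UG-rich motif, window size and minimum hit count are illustrative assumptions.

```python
# Hypothetical sketch: flag windows of an RNA sequence in which a short binding
# motif recurs often enough to look like a dispersed motif cluster.
import re

def motif_clusters(sequence, motif="UGUGU", window=200, step=50, min_hits=3):
    """Return (start, end, n_hits) for windows containing >= min_hits motif matches."""
    hits = [m.start() for m in re.finditer(f"(?={motif})", sequence)]  # overlapping matches
    clusters = []
    for start in range(0, max(1, len(sequence) - window + 1), step):
        n = sum(start <= h < start + window for h in hits)
        if n >= min_hits:
            clusters.append((start, start + window, n))
    return clusters

# Toy 3'UTR-like sequence with UG-rich stretches separated by spacer sequence.
toy_utr = "AUGC" * 20 + "UGUGUAAUGUGUCCUGUGU" + "AUGC" * 40 + "UGUGUAAUGUGU" + "AUGC" * 20
print(motif_clusters(toy_utr))
```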

Seminar · Neuroscience · Recording

NMC4 Short Talk: Image embeddings informed by natural language improve predictions and understanding of human higher-level visual cortex

Aria Wang
Carnegie Mellon University
Dec 1, 2021

To better understand how humans process natural scenes, we extracted features from images using CLIP, a neural network model of visual concepts trained with supervision from natural language. We then constructed voxelwise encoding models to explain whole-brain responses arising from viewing natural images from the Natural Scenes Dataset (NSD), a large-scale fMRI dataset collected at 7T. Our results reveal that CLIP, compared to convolution-based image classification models such as ResNet or AlexNet, as well as language models such as BERT, gives rise to representations that enable better prediction performance (up to a 0.86 correlation with test data and an R-squared of 0.75) in higher-level visual cortex in humans. Moreover, CLIP representations explain unique variance in these higher-level visual areas compared to models trained only on images or only on text. Control experiments show that the improvement in prediction observed with CLIP is not due to architectural differences (transformer vs. convolution) or to the encoding of image captions per se (vs. single object labels). Together, our results indicate that CLIP and, more generally, multimodal models trained jointly on images and text may serve as better candidate models of representation in human higher-level visual cortex. The bridge between language and vision provided by jointly trained models such as CLIP also opens up new and more semantically rich ways of interpreting the visual brain.
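
A minimal sketch of the kind of voxelwise encoding analysis described above, assuming the open-source CLIP package (ViT-B/32 backbone) and ridge regression; the file names, train/test split and regularisation grid are placeholders, and the study's actual preprocessing and cross-validation are not reproduced.

```python
# Sketch: CLIP image embeddings -> per-voxel ridge regression encoding model.
# Paths, data files and the train/test split below are hypothetical placeholders.
from glob import glob
import numpy as np
import torch
import clip                      # OpenAI CLIP package
from PIL import Image
from sklearn.linear_model import RidgeCV

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_features(paths):
    ims = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
    with torch.no_grad():
        return model.encode_image(ims).float().cpu().numpy()

image_paths = sorted(glob("nsd_stimuli/*.png"))     # hypothetical local copies of NSD stimuli
X = clip_features(image_paths)                      # (n_images, 512) CLIP embeddings
Y = np.load("voxel_responses.npy")                  # (n_images, n_voxels) fMRI betas, hypothetical file

n_train = int(0.8 * len(X))
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[:n_train], Y[:n_train])
pred = enc.predict(X[n_train:])
# Prediction accuracy per voxel: correlation between predicted and held-out responses.
r = np.array([np.corrcoef(pred[:, v], Y[n_train:, v])[0, 1] for v in range(Y.shape[1])])
```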

Seminar · Neuroscience · Recording

Deep kernel methods

Laurence Aitchison
University of Bristol
Nov 25, 2021

Deep neural networks (DNNs) with the flexibility to learn good top-layer representations have eclipsed shallow kernel methods without that flexibility. Here, we take inspiration from deep neural networks to develop a new family of deep kernel methods. In a deep kernel method, there is a kernel at every layer, and the kernels are jointly optimized to improve performance (with strong regularisation). We establish the representational power of deep kernel methods by showing that they perform exact inference in an infinitely wide Bayesian neural network or deep Gaussian process. Next, we conjecture that the deep kernel machine objective is unimodal, and give a proof of unimodality for linear kernels. Finally, we exploit the simplicity of the deep kernel machine loss to develop a new family of optimizers, based on a matrix equation from control theory, that converges in around 10 steps.
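
The following is a toy sketch of the "kernel at every layer, jointly optimised" idea only; it is not the deep kernel machine objective or the control-theory-based optimizer from the talk. Two stacked RBF kernels have their lengthscales tuned together by gradient descent on a regularised kernel ridge fit, with all data and hyperparameters invented for illustration.

```python
# Toy "deep kernel" sketch: a kernel at every layer, with the lengthscales of
# both layers optimised jointly. Illustration only, not the talk's method.
import torch

def rbf(X, Z, lengthscale):
    return torch.exp(-torch.cdist(X, Z) ** 2 / (2 * lengthscale ** 2))

def deep_gram(Xa, Xb, Xtr, ls1, ls2):
    # Layer 1: rows of the Gram matrix against the training set act as features.
    Ha, Hb = rbf(Xa, Xtr, ls1), rbf(Xb, Xtr, ls1)
    # Layer 2: a second RBF kernel on that learned representation.
    return rbf(Ha, Hb, ls2)

torch.manual_seed(0)
Xtr = torch.linspace(-3, 3, 40).unsqueeze(1)
ytr = torch.sin(3 * Xtr).squeeze() + 0.1 * torch.randn(40)
Xval = torch.rand(20, 1) * 6 - 3
yval = torch.sin(3 * Xval).squeeze()

log_ls = torch.zeros(2, requires_grad=True)            # jointly optimised kernel parameters
opt = torch.optim.Adam([log_ls], lr=0.05)
for _ in range(200):
    ls1, ls2 = log_ls.exp()
    K = deep_gram(Xtr, Xtr, Xtr, ls1, ls2)
    alpha = torch.linalg.solve(K + 0.1 * torch.eye(40), ytr)   # ridge term as regularisation
    pred = deep_gram(Xval, Xtr, Xtr, ls1, ls2) @ alpha
    loss = ((pred - yval) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```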

Seminar · Neuroscience · Recording

It’s not what you look at that matters, it’s what you see

Yaara Yeshurun
Tel Aviv University
Aug 5, 2020

People frequently interpret the same information differently, based on their prior beliefs and views. This may occur in everyday settings, as when two friends are watching the same movie, but also in more consequential circumstances, such as when people interpret the same news differently based on their political views. The role of subjective knowledge in altering how the brain processes narratives has been explored mainly in controlled settings. I will present two projects that examine the neural mechanisms underlying narrative interpretation “in the wild”: how responses differ between two groups of people who interpret the same narrative in two coherent but opposing ways. In the first project, we manipulated participants’ prior knowledge to make them interpret the narrative differently, and found that responses in high-order areas, including the default mode network, language areas and subsets of the mirror neuron system, tend to be similar among people who share the same interpretation but to differ from those of people with an opposing interpretation. In contrast to the active manipulation of participants’ interpretation in the first study, in the second (ongoing) project we examine these processes in a more ecological setting. Taking advantage of people’s natural tendency to interpret the world through their own (political) filters, we examine these mechanisms while measuring brain responses to political movie clips. These studies are intended to deepen our understanding of differences in subjective construal processes by mapping their underlying brain mechanisms.
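
As a hedged sketch of the group comparison described above (synthetic data, not the study's actual analysis), the snippet below contrasts the average inter-subject correlation of a region's response time course within an interpretation group with the correlation between the two groups.

```python
# Sketch: within-group vs between-group inter-subject correlation (ISC) of a
# regional time course. All data shapes and values are hypothetical placeholders.
import numpy as np
from itertools import combinations, product

def mean_isc(group_a, group_b=None):
    """Mean pairwise Pearson correlation of time courses (subjects x timepoints)."""
    if group_b is None:
        pairs = combinations(range(len(group_a)), 2)
        return np.mean([np.corrcoef(group_a[i], group_a[j])[0, 1] for i, j in pairs])
    pairs = product(range(len(group_a)), range(len(group_b)))
    return np.mean([np.corrcoef(group_a[i], group_b[j])[0, 1] for i, j in pairs])

# Hypothetical regional time courses for two interpretation groups.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)
group1 = shared + 0.5 * rng.standard_normal((20, 300))
group2 = -shared + 0.5 * rng.standard_normal((20, 300))   # opposing interpretation
print("within:", mean_isc(group1), "between:", mean_isc(group1, group2))
```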
