Methodologies
Takashi Hashimoto
The candidate is expected to conduct active research and education in the field of Emergent AI studies. This involves using data science and AI technology to generate scientific principles through co-creative investigation of the principles, problem solving, and design underlying social, organizational, human, and natural phenomena, as well as art, and through the exploration of methodologies for such principles.
An inconvenient truth: pathophysiological remodeling of the inner retina in photoreceptor degeneration
Photoreceptor loss is the primary cause of vision impairment and blindness in diseases such as retinitis pigmentosa and age-related macular degeneration. Beyond eliminating light responses, the death of rods and cones allows retinoids to permeate the inner retina, causing retinal ganglion cells to become spontaneously hyperactive, severely reducing the signal-to-noise ratio and interfering with communication between the surviving retina and the brain. Treatments that block or reduce this hyperactivity improve vision initiated by surviving photoreceptors and could enhance the fidelity of signals generated by vision restoration methodologies.
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing models of neural computation that increasingly resemble biological systems at the level of both behavior and neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results even when recovering a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects, such as details of the architecture of a deep neural network, and methodological choices in a systems-comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case, that is, the scientific hypothesis under investigation, which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
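To make the role of the comparison measure concrete, here is a minimal sketch that applies two common, purely illustrative measures (ridge-regression predictivity and linear CKA) to the same pair of model and neural response matrices; the measures, matrix shapes, and random data are assumptions, not the methodology used in the talk.

```python
# Hedged sketch: two common representation-comparison measures applied to the
# same pair of systems. Ridge-regression predictivity and linear CKA are
# illustrative choices, not the specific measures from the study.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def linear_predictivity(model_feats, neural_resps, alpha=1.0):
    """Cross-validated R^2 of a ridge map from model features to neural responses."""
    preds = cross_val_predict(Ridge(alpha=alpha), model_feats, neural_resps, cv=5)
    ss_res = np.sum((neural_resps - preds) ** 2)
    ss_tot = np.sum((neural_resps - neural_resps.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

def linear_cka(X, Y):
    """Linear centered kernel alignment between two (stimuli x units) matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
model_feats = rng.standard_normal((200, 50))   # 200 stimuli x 50 model units
neural_resps = rng.standard_normal((200, 30))  # 200 stimuli x 30 recorded neurons

print("predictivity:", linear_predictivity(model_feats, neural_resps))
print("linear CKA:  ", linear_cka(model_feats, neural_resps))
```

The two scores weight neural dimensions very differently, which is exactly the kind of methodological choice the abstract argues can change the conclusion of a comparison.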
How Generative AI is Revolutionizing the Software Developer Industry
Generative AI is fundamentally transforming the software development industry by improving software testing, bug detection and repair, and developer productivity. This talk explores how AI-driven techniques, particularly large language models (LLMs), are being used to generate realistic test scenarios, automate bug detection and repair, and streamline development workflows. As these technologies evolve, they promise to significantly improve software quality and efficiency. The discussion will cover key methodologies, challenges, and the future impact of generative AI on the software development lifecycle, offering a comprehensive overview of its revolutionary potential in the industry.
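As an illustration of LLM-assisted test generation, here is a minimal sketch; generate_with_llm is a hypothetical placeholder for whatever model client a team actually uses, and the prompt format is an assumption rather than a method described in the talk.

```python
# Hedged sketch of LLM-assisted test generation. `generate_with_llm` is a
# hypothetical placeholder, not a real library call; a human still reviews
# any drafted tests before they enter the suite.
import inspect

def generate_with_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def slugify(text: str) -> str:
    """Example function under test."""
    return "-".join(text.lower().split())

def draft_unit_tests(func) -> str:
    prompt = (
        "Write pytest unit tests, including edge cases, for the following "
        f"Python function:\n\n{inspect.getsource(func)}"
    )
    return generate_with_llm(prompt)

# print(draft_unit_tests(slugify))  # returns a draft test file as text
```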
The quest for brain identification
In the 17th century, the physician Marcello Malpighi observed the existence of distinctive patterns of ridges and sweat glands on fingertips. This was a major breakthrough, and it originated a long and continuing quest for ways to uniquely identify individuals based on fingerprints, a technique still in widespread use today. Only in the past few years have technologies and methodologies achieved measures of an individual’s brain of high enough quality that personality traits and behavior can be characterized. The concept of “fingerprints of the brain” is much more recent and was boosted by a seminal publication by Finn et al. in 2015. They were among the first to show that an individual’s functional brain connectivity profile is both unique and reliable, much like a fingerprint, and that it is possible to identify an individual among a large group of subjects solely on the basis of her or his connectivity profile. Yet the discovery of brain fingerprints opened up a plethora of new questions. In particular, what exactly is the information encoded in brain connectivity patterns that ultimately allows us to correctly differentiate someone’s connectome from anybody else’s? In other words, what makes our brains unique? In this talk I am going to partially address these open questions while keeping a personal viewpoint on the subject. I will outline the main findings, discuss potential issues, and propose future directions in the quest for identifiability of human brain networks.
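A minimal sketch of the identification procedure in the spirit of Finn et al. (2015): each subject's session-one connectivity profile is matched to the most correlated session-two profile. The data shapes, simulated connectomes, and use of plain Pearson correlation are illustrative assumptions.

```python
# Hedged sketch of connectome-based identification: match each subject's
# session-1 connectivity profile to the most correlated session-2 profile.
import numpy as np

def identification_accuracy(session1, session2):
    """session1, session2: (n_subjects, n_edges) vectorized connectomes."""
    n = session1.shape[0]
    # Pearson correlation between every session-1 and session-2 profile
    s1 = (session1 - session1.mean(1, keepdims=True)) / session1.std(1, keepdims=True)
    s2 = (session2 - session2.mean(1, keepdims=True)) / session2.std(1, keepdims=True)
    corr = s1 @ s2.T / session1.shape[1]
    predicted = corr.argmax(axis=1)            # best-matching session-2 subject
    return np.mean(predicted == np.arange(n))  # fraction correctly identified

rng = np.random.default_rng(1)
base = rng.standard_normal((30, 500))          # 30 simulated subjects, 500 edges
acc = identification_accuracy(base + 0.3 * rng.standard_normal(base.shape),
                              base + 0.3 * rng.standard_normal(base.shape))
print(f"identification accuracy: {acc:.2f}")
```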
Brain-heart interactions at the edges of consciousness
Various clinical cases have provided evidence linking cardiovascular, neurological, and psychiatric disorders to changes in brain-heart interaction. Our recent experimental evidence from patients with disorders of consciousness revealed that observing brain-heart interactions helps to detect residual consciousness, even in patients who show no behavioral signs of consciousness. These findings support hypotheses that visceral activity is involved in the neurobiology of consciousness, and they add to existing evidence from healthy participants in whom neural responses to heartbeats reveal perceptual and self-consciousness. Furthermore, the presence of non-linear, complex, and bidirectional communication between brain and heartbeat dynamics can provide further insights into the physiological state of the patient following severe brain injury. These developments in methodologies for analyzing brain-heart interactions open new avenues for understanding neural functioning at a large-scale level, revealing that peripheral bodily activity can influence brain homeostatic processes, cognition, and behavior.
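One widely used measure of brain-heart interaction is the heartbeat-evoked response, i.e. EEG averaged around detected ECG R-peaks; the sketch below assumes synthetic data, an illustrative epoch window, and a fixed sampling rate, and is not the specific methodology of the talk.

```python
# Hedged sketch of a heartbeat-evoked response: average EEG epochs locked to
# ECG R-peaks. Shapes, sampling rate, and the epoch window are assumptions.
import numpy as np

def heartbeat_evoked_response(eeg, r_peaks, sfreq, tmin=-0.1, tmax=0.6):
    """eeg: (n_channels, n_samples); r_peaks: sample indices of detected R-peaks."""
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = [eeg[:, p + start:p + stop] for p in r_peaks
              if p + start >= 0 and p + stop <= eeg.shape[1]]
    return np.mean(epochs, axis=0)  # (n_channels, n_epoch_samples)

sfreq = 250.0
eeg = np.random.default_rng(2).standard_normal((32, 60 * int(sfreq)))  # 1 min, 32 ch
r_peaks = np.arange(200, eeg.shape[1] - 200, int(0.8 * sfreq))          # ~75 bpm
her = heartbeat_evoked_response(eeg, r_peaks, sfreq)
print(her.shape)
```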
From controlled environments to complex realities: Exploring the interplay between perceived minds and attention
In our daily lives, we perceive things as possessing a mind (e.g., people) or lacking one (e.g., shoes). Intriguingly, how much mind we attribute to people can vary, with real people perceived to have more mind than depictions of individuals, such as photographs. Drawing on a range of research methodologies, including naturalistic observation, mobile eye tracking, and surreptitious behavior monitoring, I discuss how various shades of mind influence human attention and behaviour. The findings suggest the novel concept that overt attention (where one looks) in real life is fundamentally supported by covert attention (attending to someone out of the corner of one's eye).
Hypothalamic episode generators underlying the neural control of fertility
The hypothalamus controls diverse homeostatic functions, including fertility. Neural episode generators are required to drive the intermittent pulsatile and surge profiles of reproductive hormone secretion that control gonadal function. Studies in genetic mouse models have been fundamental in defining the neural circuits forming these central pattern generators, and the full range of in vitro and in vivo optogenetic and chemogenetic methodologies has enabled investigation into their mechanism of action. The seminar will outline studies defining the hypothalamic “GnRH pulse generator network” and current understanding of how it operates to drive pulsatile hormone secretion.
Diurnal rhythms of the eye
Do all components of the living human eye have a measurable diurnal rhythm? In this talk I will discuss methodologies and results of studies on adolescents and young adults. I will also touch upon the associations between diurnal rhythms of the eye and behavioral activities.
NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity
Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies with minimal apriorism, which bring neuroscience data directly to bear on the model development process, are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual systems and achieve novel predictive precision, in some cases significantly outperforming state-of-the-art task-optimized models. Next, we ask what kinds of functional properties emerge spontaneously in these response-optimized models. We examine trained networks through structural analysis (feature visualizations) as well as functional analysis (feature verbalizations) by running 'virtual' fMRI experiments on large-scale probe datasets. Strikingly, despite receiving no category-level supervision (the models are optimized from scratch solely for brain-response prediction), units in the optimized networks act as detectors for semantic concepts such as 'faces' or 'words', providing some of the strongest evidence for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism? To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting 'deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together, this new class of data-driven models and novel model-interpretability techniques illustrate that DNN models of visual cortex need not be conceived as obscure models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.
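A minimal sketch of what "response-optimized" means here: a small convolutional network trained from scratch, end to end, to predict voxel responses to images. The architecture, loss, and random stand-in data are illustrative assumptions; the actual models are constrained by the Natural Scenes Dataset.

```python
# Hedged sketch of a response-optimized encoding model: a small CNN trained
# from scratch to predict voxel responses to images. All shapes and the
# architecture are toy assumptions, not those of the study.
import torch
import torch.nn as nn

class ResponseOptimizedEncoder(nn.Module):
    def __init__(self, n_voxels: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.readout = nn.Linear(64 * 4 * 4, n_voxels)

    def forward(self, images):
        return self.readout(self.backbone(images))

model = ResponseOptimizedEncoder(n_voxels=1000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(16, 3, 128, 128)   # stand-in for naturalistic stimuli
voxels = torch.randn(16, 1000)          # stand-in for fMRI responses

for _ in range(5):                       # toy training loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), voxels)
    loss.backward()
    optimizer.step()
```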
Demystifying the richness of visual perception
Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet the keys may be quite visible once found. How does one explain this combination of marvelous successes and quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow, and the representation becomes less precise, with distance from fixation. This scheme gains computation of sophisticated image features at the expense of spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to quickly see that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision. In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.
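The core of the proposed encoding can be sketched as pooling of local image statistics over regions that grow with eccentricity; the statistics used below (mean and standard deviation), the scaling factor, and the tiling are deliberately simplified assumptions, far poorer than the rich statistics described in the abstract.

```python
# Hedged sketch of a pooling-region summary-statistic encoding: summarize an
# image with local statistics in regions that grow with distance from fixation.
import numpy as np

def pooled_statistics(image, fixation, scaling=0.5, min_radius=4):
    """Return (center, radius, mean, std) for pooling regions over the image."""
    h, w = image.shape
    fy, fx = fixation
    y, x = np.mgrid[0:h, 0:w]
    stats = []
    for cy in range(0, h, 2 * min_radius):
        for cx in range(0, w, 2 * min_radius):
            ecc = np.hypot(cy - fy, cx - fx)
            radius = max(min_radius, scaling * ecc)   # regions grow with eccentricity
            patch = image[np.hypot(y - cy, x - cx) <= radius]
            stats.append(((cy, cx), radius, patch.mean(), patch.std()))
    return stats

image = np.random.default_rng(3).random((128, 128))
summary = pooled_statistics(image, fixation=(64, 64))
print(len(summary), "pooling regions")
```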
Learning the structure and investigating the geometry of complex networks
Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality. Machine learning and networks. Decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph data sets that computes several thousand graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark data sets whilst retaining the interpretability of network features, which we exemplify on a dataset of neuronal morphology images. Graph dimensionality. Dimension is a fundamental property of objects and of the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from approaches based on fractals, we present here a new framework that defines intrinsic notions of dimension on networks: the relative, local and global dimensions. We showcase our method on various physical systems.
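A minimal sketch of the feature-based pipeline, assuming networkx and scikit-learn, a handful of hand-picked features, and a synthetic two-class graph dataset; hcga itself computes several thousand features and adds feature selection and interpretability tooling on top.

```python
# Hedged sketch of graph classification from hand-crafted features. The feature
# set and synthetic dataset are illustrative; this is not the hcga API.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def graph_features(G):
    return [
        nx.density(G),
        nx.transitivity(G),
        nx.average_clustering(G),
        nx.degree_assortativity_coefficient(G),
        np.mean([d for _, d in G.degree()]),
    ]

# Two synthetic graph classes: Erdos-Renyi vs Barabasi-Albert
graphs = [nx.erdos_renyi_graph(60, 0.1, seed=i) for i in range(50)] + \
         [nx.barabasi_albert_graph(60, 3, seed=i) for i in range(50)]
labels = [0] * 50 + [1] * 50

X = np.array([graph_features(G) for G in graphs])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```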
Computational psychophysics at the intersection of theory, data and models
Behavioural measurements are often overlooked by computational neuroscientists, who prefer to focus on electrophysiological recordings or neuroimaging data. This attitude is largely due to a perceived lack of depth and richness in behavioural datasets. I will show how contemporary psychophysics can deliver extremely rich and highly constraining datasets that naturally interface with computational modelling. More specifically, I will demonstrate how psychophysics can be used to guide, constrain, and refine computational models, and how models can be exploited to design, motivate, and interpret psychophysical experiments. Examples will span a wide range of topics (from feature detection to natural scene understanding) and methodologies (from cascade models to deep learning architectures).
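As an example of how behavioural data interface with modelling, here is a minimal sketch that fits a psychometric function (a cumulative Gaussian with a lapse rate, for a two-alternative task) to simulated proportion-correct data; the functional form, parameters, and simulated observer are illustrative assumptions, not the models discussed in the talk.

```python
# Hedged sketch: fit a psychometric function to simulated 2AFC data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(contrast, threshold, slope, lapse):
    # 0.5 guess rate for a two-alternative task, with a small lapse rate
    return 0.5 + (0.5 - lapse) * norm.cdf(contrast, loc=threshold, scale=slope)

contrasts = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])
rng = np.random.default_rng(4)
true_p = psychometric(contrasts, threshold=0.06, slope=0.03, lapse=0.02)
p_correct = rng.binomial(100, true_p) / 100          # 100 trials per level

params, _ = curve_fit(psychometric, contrasts, p_correct,
                      p0=[0.05, 0.05, 0.01],
                      bounds=([0, 1e-3, 0], [0.5, 0.5, 0.1]))
print("threshold, slope, lapse:", params)
```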
Using marmosets for the study of the visual cortex: unique opportunities, and some pitfalls
Marmosets (Callithrix jacchus) are small South American monkeys that are increasingly being adopted as animal models in neuroscience. Knowledge about the marmoset visual system has developed rapidly over the last decade. But what are the comparative advantages and disadvantages of adopting this emerging model, as opposed to the more traditionally used macaque monkey? In this talk I will present case studies in which the simpler brain morphology and short developmental cycle of the marmoset have been key factors in facilitating discoveries about the anatomy and physiology of the visual system. Although no single species provides the “ideal” animal model for invasive studies of the neural bases of visual processing, I argue that the development of robust methodologies for the study of the marmoset brain provides exciting opportunities to address long-standing problems in neuroscience.
The developing visual brain – answers and questions
We will start our talk with a short video of our research, illustrating methods (some old, some new) and findings that have provided our current understanding of how visual capabilities develop in infancy and early childhood. However, our research poses some outstanding questions. We will briefly discuss three issues, which are linked by a common focus on the development of visual attentional processing. (1) How do recurrent cortical loops contribute to development? Cortical selectivity (e.g., to orientation, motion, and binocular disparity) develops in the early months of life. However, these systems are not purely feedforward but depend on parallel pathways, with recurrent feedback loops playing a critical role. The development of diverse networks, particularly for motion processing, may explain changes in dynamic responses and reconcile developmental data obtained with different methodologies. One possible role for these loops is in top-down attentional control of visual processing. (2) Why do hyperopic infants become strabismic (cross-eyed)? Binocular interaction is a particularly sensitive area of development. Standard clinical accounts suppose that long-sighted (hyperopic) refractive errors require accommodative effort, putting stress on the accommodation-convergence link that leads to its breakdown and to strabismus. Our large-scale population screening studies of 9-month-old infants question this: hyperopic infants are at higher risk of strabismus and impaired vision (amblyopia and impaired attention), but these hyperopic infants often under- rather than over-accommodate. This poor accommodation may reflect poor early attention processing, possibly a ‘soft sign’ of subtle cerebral dysfunction. (3) What do many neurodevelopmental disorders have in common? Despite similar cognitive demands, global motion perception is much more impaired than global static form perception across diverse neurodevelopmental disorders, including Down and Williams syndromes, Fragile X, autism, children born prematurely, and infants with perinatal brain injury. These deficits in motion processing are associated with deficits in other dorsal-stream functions such as visuo-motor co-ordination and attentional control, a cluster we have called ‘dorsal stream vulnerability’. However, our neuroimaging measures related to motion coherence in typically developing children suggest that the critical areas for individual differences in global motion sensitivity are not early motion-processing areas such as V5/MT, but downstream parietal and frontal areas involved in decision processes on motion signals. Although these brain networks may also underlie attentional and visuo-motor deficits, we still do not know when and how these deficits differ across different disorders and between individual children. Answering these questions provides necessary steps, not only in increasing our scientific understanding of human visual brain development, but also in designing appropriate interventions to help each child achieve their full potential.
Discovery of Novel Gain-of-Function Mutations Guided by Structure-Based Deep Learning
The life of biological molecules spans time and length scales ranging from the atomic to the cellular. Hence, novel molecular modeling approaches must be inherently multi-scale. Here we describe several methodologies developed in our laboratory: a rapid discrete molecular dynamics simulation algorithm, and protein design and structural refinement tools. Using these methodologies, we describe therapeutic strategies to combat HIV and cancer, and we design novel approaches for controlling proteins in living cells and organisms.
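A toy illustration of the event-driven idea behind discrete molecular dynamics, assuming equal-mass hard rods on a line that simply exchange velocities on contact; real DMD engines use stepwise potentials in three dimensions, so this is a sketch of the scheduling logic only.

```python
# Hedged toy sketch of event-driven (discrete) molecular dynamics: instead of
# integrating forces with a fixed time step, advance the system analytically
# to the next collision event. Equal-mass hard rods on a line for simplicity.
import numpy as np

def next_collision(x, v, radius):
    """Earliest collision time between adjacent rods, or (inf, None)."""
    t_best, pair = np.inf, None
    for i in range(len(x) - 1):
        gap = (x[i + 1] - x[i]) - 2 * radius
        rel_v = v[i] - v[i + 1]
        if rel_v > 1e-12:                    # rods are approaching
            t = max(gap / rel_v, 0.0)        # clamp round-off to zero
            if t < t_best:
                t_best, pair = t, i
    return t_best, pair

x = np.array([0.0, 3.0, 5.0, 9.0])           # rod centre positions
v = np.array([1.0, -0.5, 0.8, -1.0])         # rod velocities
radius, t_total = 0.25, 0.0

for _ in range(10):                           # process up to ten events
    dt, i = next_collision(x, v, radius)
    if i is None:                             # no further collisions
        break
    x = x + v * dt                            # free flight to the event
    t_total += dt
    v[i], v[i + 1] = v[i + 1], v[i]           # equal-mass elastic exchange
    print(f"t = {t_total:.3f}: rods {i} and {i + 1} collide")
```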