Topic: applications (Neuro)

50 Seminars · 5 ePosters

Latest

Seminar · Neuroscience

The Systems Vision Science Summer School & Symposium, August 11 – 22, 2025, Tuebingen, Germany

Marco Bertamini, David Brainard, Peter Dayan, Andrea van Doorn, Roland Fleming, Pascal Fries, Wilson S Geisler, Robbe Goris, Sheng He, Tadashi Isa, Tomas Knapen, Jan Koenderink, Larry Maloney, Keith May, Marcello Rosa, Jonathan Victor
Aug 11 – 22, 2025

Applications are invited for the third edition of our Systems Vision Science (SVS) summer school, running since 2023 and designed for everyone interested in gaining a systems-level understanding of biological vision. We plan a coherent, graduate-level syllabus on the integration of experimental data with theory and models, featuring lectures, guided exercises and discussion sessions. The summer school will end with a Systems Vision Science symposium on frontier topics on August 20-22, with additional invited and contributed presentations and posters. The call for contributions and participation in the symposium will be sent out in spring 2025. All summer school participants are invited to attend and are welcome to submit contributions to the symposium.

Seminar · Neuroscience

Probing White Matter Microstructure With Diffusion-Weighted MRI: Techniques and Applications in ADRD

Shruti Mishra
University of Michigan
Aug 7, 2024

Seminar · Neuroscience

Generative models for video games (rescheduled)

Katja Hofmann
Microsoft Research
May 22, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances by my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Diffusion models were previously shown to have impressive image generation capabilities; I present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar · Neuroscience

Volume measures in studies of hippocampal subfield structure: methodological considerations and applications

Roya Homayouni
May 20, 2024

Seminar · Neuroscience

Generative models for video games

Katja Hofmann
Microsoft Research
May 1, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances by my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Diffusion models were previously shown to have impressive image generation capabilities; I present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar · Neuroscience

Immature brain insults and possible effects on cholinergic system neuroplasticity

Katerina Psarropoulou
Dept of Biological Applications & Technology, University of Ioannina, Greece
Mar 27, 2024

Seminar · Neuroscience · Recording

Brain network communication: concepts, models and applications

Caio Seguin
Indiana University
Aug 25, 2023

Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
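
To make the contrast between routing and diffusion concrete, here is a small illustrative sketch (not the authors' code) that computes two widely used communication measures on a toy weighted connectivity matrix: shortest-path efficiency, which assumes signalling along optimal paths, and communicability, a diffusion-style measure that sums over all walks with longer walks discounted. The matrix values and the 1/weight length transform are assumptions made for the example.

```python
# Hypothetical toy example: two polysynaptic communication measures on a
# small weighted "connectome" (values are made up for illustration).
import numpy as np
from scipy.linalg import expm
from scipy.sparse.csgraph import shortest_path

W = np.array([  # symmetric connection weights, no self-connections
    [0.0, 0.8, 0.0, 0.1],
    [0.8, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.9],
    [0.1, 0.0, 0.9, 0.0],
])

# Shortest-path routing: map weights to lengths (stronger = shorter), then
# find the optimal path length between every pair of nodes. Zeros in the
# dense matrix mark absent connections.
lengths = np.where(W > 0, 1.0 / W, 0.0)
D = shortest_path(lengths, method="D", directed=False)
efficiency = np.where(D > 0, 1.0 / D, 0.0)  # pairwise routing efficiency

# Communicability: sums over all walks, discounting walks of length k by 1/k!,
# here with a common degree normalization for weighted networks.
deg = W.sum(axis=1)
G = expm(W / np.sqrt(np.outer(deg, deg)))

print("shortest-path efficiency:\n", np.round(efficiency, 2))
print("communicability:\n", np.round(G, 2))
```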

Seminar · Neuroscience · Recording

AI-assisted language learning: Assessing learners who memorize and reason by analogy

Pierre-Alexandre Murena
University of Helsinki
Oct 5, 2022

Vocabulary learning applications like Duolingo have millions of users around the world, yet are based on very simple heuristics for choosing the teaching material they provide to their users. In this presentation, we will discuss the possibility of developing more advanced artificial teachers, based on models of the learner’s inner characteristics. In the case of teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as part of the learner’s model.
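
As a hedged illustration of the kind of learner model the talk alludes to (my own sketch, not the speaker's system), the snippet below gives each word an exponential forgetting curve and lets the artificial teacher pick the item whose predicted recall probability is lowest; the half-life update rule and all parameters are invented for the example.

```python
# Hypothetical sketch of a model-based vocabulary teacher: each word has an
# exponential forgetting curve, and the teacher reviews the item whose
# predicted recall probability is lowest. Parameters are made up.
class WordMemory:
    def __init__(self, half_life_hours=4.0):
        self.half_life = half_life_hours      # grows with each successful review
        self.last_review = 0.0                # time of last review (hours)

    def recall_probability(self, now):
        elapsed = now - self.last_review
        return 2.0 ** (-elapsed / self.half_life)

    def review(self, now, correct):
        # Successful recall strengthens the memory; failure weakens it.
        self.half_life = max(self.half_life * (2.0 if correct else 0.5), 1.0)
        self.last_review = now

def pick_next_word(memories, now):
    """The teacher selects the word the model predicts is closest to being forgotten."""
    return min(memories, key=lambda w: memories[w].recall_probability(now))

memories = {w: WordMemory() for w in ["talo", "kirja", "vesi"]}
memories["talo"].review(now=0.0, correct=True)
memories["kirja"].review(now=2.0, correct=False)
print(pick_next_word(memories, now=6.0))   # most at-risk word at t = 6 h
```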

Seminar · Neuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 22, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

Seminar · Neuroscience · Recording

Alternative Applications of Foraging Theory

David Barack & Thomas Hills
University of Pennsylvania, University of Warwick
May 10, 2022

Seminar · Neuroscience · Recording

Visualization and manipulation of our perception and imagery by BCI

Takufumi Yanagisawa
Osaka University
Apr 1, 2022

We have been developing brain-computer interfaces (BCIs) for clinical applications using electrocorticography (ECoG) [1], which is recorded by electrodes implanted on the brain surface, and magnetoencephalography (MEG) [2], which records cortical activity non-invasively. The invasive ECoG-based BCI has been applied to severely paralyzed patients to restore communication and motor function. The non-invasive MEG-based BCI has been applied as a neurofeedback tool to modulate pathological neural activity in order to treat neuropsychiatric disorders. Although these techniques have been developed for clinical applications, BCI is also an important tool for investigating neural function. For example, a motor BCI records neural activity in a part of the motor cortex to generate movements of external devices. Although our motor system is a complex system including the motor cortex, basal ganglia, cerebellum, spinal cord and muscles, a BCI lets us simplify it into a system with exactly known inputs, outputs and the relation between them, so we can investigate the motor system by manipulating the parameters of the BCI. Recently, we have been developing BCIs to visualize and manipulate our perception and mental imagery. Although these BCIs have been developed for clinical applications, they will also be useful for understanding the neural systems that generate perception and imagery. In this talk, I will introduce our study of phantom limb pain [3], which is controlled by an MEG-BCI, and the development of a communication BCI using ECoG [4], which enables subjects to visualize the contents of their mental imagery. I would also like to discuss how much we can control the cortical activities that represent our perception and mental imagery. These examples demonstrate that BCI is a promising tool to visualize and manipulate perception and imagery and to understand our consciousness. References 1. Yanagisawa, T., Hirata, M., Saitoh, Y., Kishima, H., Matsushita, K., Goto, T., Fukuma, R., Yokoi, H., Kamitani, Y., and Yoshimine, T. (2012). Electrocorticographic control of a prosthetic arm in paralyzed patients. Ann Neurol 71, 353-361. 2. Yanagisawa, T., Fukuma, R., Seymour, B., Hosomi, K., Kishima, H., Shimizu, T., Yokoi, H., Hirata, M., Yoshimine, T., Kamitani, Y., et al. (2016). Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications 7, 13209. 3. Yanagisawa, T., Fukuma, R., Seymour, B., Tanaka, M., Hosomi, K., Yamashita, O., Kishima, H., Kamitani, Y., and Saitoh, Y. (2020). BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 95, e417-e426. 4. Ryohei Fukuma, Takufumi Yanagisawa, Shinji Nishimoto, Hidenori Sugano, Kentaro Tamura, Shota Yamamoto, Yasushi Iimura, Yuya Fujita, Satoru Oshino, Naoki Tani, Naoko Koide-Majima, Yukiyasu Kamitani, Haruhiko Kishima (2022). Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. arXiv:2112.01223.

Seminar · Neuroscience · Recording

Cross-modality imaging of the neural systems that support executive functions

Yaara Erez
Affiliate MRC Cognition and Brain Sciences Unit, University of Cambridge
Mar 1, 2022

Executive functions refer to a collection of mental processes such as attention, planning and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and specifically in the context of patients with brain tumours there is a need to preserve them in order to enable a good quality of life. During surgeries for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the surrounding areas, so as to preserve function. In many cases, functional mapping is conducted during an awake surgery in order to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition in recent years of the importance of these functions for patient well-being, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, directed at reliably localizing executive function regions in individuals using fMRI. I will also discuss more broadly the challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 23, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing rate network models can exhibit such sensitivity to initial conditions that are reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, thus a high speed by which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
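
For readers unfamiliar with the quantities mentioned above, the sketch below estimates the largest Lyapunov exponent of a generic random firing-rate network by tracking the divergence of a slightly perturbed trajectory and repeatedly renormalizing it (the standard Benettin procedure). It is a simplified illustration with assumed parameters (network size, gain g, integration step), not the mean-field machinery or spiking models used in the talk.

```python
# Illustrative sketch (not the speaker's code): largest Lyapunov exponent of a
# random rate network dx/dt = -x + J*tanh(x) via trajectory renormalization.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt = 200, 2.0, 0.01                  # network size, coupling gain, Euler step
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

def step(x):
    # One Euler step of the rate dynamics.
    return x + dt * (-x + J @ np.tanh(x))

x = rng.normal(size=N)
for _ in range(1000):                      # discard the transient
    x = step(x)

eps, steps, log_growth = 1e-8, 5000, 0.0
v = rng.normal(size=N)
x_pert = x + eps * v / np.linalg.norm(v)   # tiny perturbation of the state
for _ in range(steps):
    x, x_pert = step(x), step(x_pert)
    d = np.linalg.norm(x_pert - x)
    log_growth += np.log(d / eps)
    x_pert = x + (eps / d) * (x_pert - x)  # renormalize the perturbation

print(f"largest Lyapunov exponent ~ {log_growth / (steps * dt):.3f} (positive => chaos)")
```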

Seminar · Neuroscience

From natural scene statistics to multisensory integration: experiments, models and applications

Cesare Parise
Oculus VR
Feb 9, 2022

To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
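
As a concrete example of one such canonical computation, the sketch below implements reliability-weighted averaging (maximum-likelihood cue combination), in which each cue is weighted by its inverse variance; the audiovisual numbers are invented for illustration and are not taken from the talk.

```python
# Minimal sketch of reliability-weighted cue combination. Example values are
# invented; the principle is the standard inverse-variance weighting rule.
import numpy as np

def integrate(estimates, sigmas):
    """Combine independent Gaussian cues: weights proportional to 1/sigma^2."""
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = precisions / precisions.sum()
    fused = np.dot(weights, estimates)
    fused_sigma = np.sqrt(1.0 / precisions.sum())
    return fused, fused_sigma, weights

# A visual and an auditory estimate of an event's azimuth (degrees).
fused, fused_sigma, w = integrate(estimates=[10.0, 2.0], sigmas=[1.0, 4.0])
print(f"fused = {fused:.2f} deg, fused sigma = {fused_sigma:.2f}, weights = {w.round(3)}")
# The fused estimate sits close to the more reliable (visual) cue and has a
# smaller variance than either cue alone -- the same principle that produces
# illusions such as the ventriloquist effect.
```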

Seminar · Neuroscience

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Jan 24, 2022

Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been shown. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how their single neurons approximate those in the macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition builds on low-level visual processing. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1 followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
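
The sketch below conveys the general idea of a fixed, V1-like front-end in simplified form: a bank of Gabor filters followed by rectification, whose outputs would feed a trainable back-end. It is a schematic stand-in with assumed filter parameters, not the released VOneNet implementation.

```python
# Schematic sketch (not the released VOneNet code) of a fixed V1-like
# front-end: convolve the input with a bank of Gabor filters and rectify,
# before handing the feature maps to a trainable back-end.
import numpy as np
from scipy.signal import convolve2d

def gabor(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    # Oriented Gabor kernel (Gaussian envelope times a cosine grating).
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def v1_front_end(image, n_orientations=4):
    """Fixed (untrained) Gabor bank + rectification, a stand-in for simple cells."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    responses = [convolve2d(image, gabor(theta=t), mode="same") for t in thetas]
    return np.maximum(np.stack(responses), 0.0)   # shape: (orientations, H, W)

rng = np.random.default_rng(0)
image = rng.random((64, 64))                      # placeholder input image
print(v1_front_end(image).shape)                  # (4, 64, 64) feature maps
```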

Seminar · Neuroscience · Recording

Mechanisms of sleep-seizure interactions in tuberous sclerosis and other mTORpathies

Michael Wong
Washington University
Jan 5, 2022

An intriguing, relatively unexplored therapeutic avenue to investigate epilepsy is the interaction of sleep mechanisms and seizures. Multiple lines of clinical observations suggest a strong, bi-directional relationship between epilepsy and sleep. Epilepsy and sleep disorders are common comorbidities. Seizures occur more commonly in sleep in many types of epilepsy, and in turn, seizures can cause disrupted sleep. Sudden unexplained death in epilepsy (SUDEP) is strongly associated with sleep. The biological mechanisms underlying this relationship between seizures and sleep are poorly understood, but if better delineated, could offer novel therapeutic approaches to treating both epilepsy and sleep disorders. In this presentation, I will explore this sleep-seizure relationship in mouse models of epilepsy. First, I will present general approaches for performing detailed longitudinal sleep and vigilance state analysis in mice, including pre-weanling neonatal mice. I will then discuss recent data from my laboratory demonstrating an abnormal sleep phenotype in a mouse model of the genetic epilepsy, tuberous sclerosis complex (TSC), and its relationship to seizures. The potential mechanistic basis of sleep abnormalities and sleep-seizure interactions in this TSC model will be investigated, focusing on the role of the mechanistic target of rapamycin (mTOR) pathway and hypothalamic orexin, with potential therapeutic applications of mTOR inhibitors and orexin antagonists. Finally, similar sleep-seizure interactions and mechanisms will be extended to models of acquired epilepsy due to status epilepticus-related brain injury.

Seminar · Neuroscience

Adaptive Deep Brain Stimulation: Investigational System Development at the Edge of Clinical Brain Computer Interfacing

Jeffrey Herron
University of Washington
Dec 16, 2021

Over the last few decades, the use of deep brain stimulation (DBS) to improve the treatment of those with neurological movement disorders represents a critical success story in the development of invasive neurotechnology and the promise of brain-computer interfaces (BCI) to improve the lives of those suffering from incurable neurological disorders. In the last decade, investigational devices capable of recording and streaming neural activity from chronically implanted therapeutic electrodes have supercharged research into clinical applications of BCI, enabling in-human studies investigating the use of adaptive stimulation algorithms to further enhance therapeutic outcomes and improve future device performance. In this talk, Dr. Herron will review ongoing clinical research efforts in the field of adaptive DBS systems and algorithms. This will include an overview of DBS in current clinical practice, the development of bidirectional clinical-use research platforms, ongoing algorithm evaluation efforts, and a discussion of current adoption barriers to be addressed in future work.
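
For orientation, the sketch below illustrates the control idea behind a simple threshold-based adaptive stimulation scheme: estimate a biomarker (here, the relative beta-band power of a sensed signal) in each analysis window and ramp the stimulation amplitude up or down accordingly. Every detail (sampling rate, window length, threshold, step size) is an assumption for illustration; it does not represent any specific clinical device or the systems discussed in the talk.

```python
# Generic illustration (not any device's API) of threshold-based adaptive DBS:
# compute a biomarker per window and adjust stimulation amplitude accordingly.
import numpy as np

def beta_fraction(segment, fs, low=13.0, high=30.0):
    """Fraction of spectral power in the beta band for one analysis window."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    power = np.abs(np.fft.rfft(segment)) ** 2
    return power[(freqs >= low) & (freqs <= high)].sum() / power.sum()

def adaptive_dbs(lfp, fs=250, window=250, threshold=0.4, step=0.1, max_amp=3.0):
    """Return the stimulation amplitude (mA) chosen for each analysis window."""
    amp, amplitudes = 0.0, []
    for start in range(0, len(lfp) - window + 1, window):
        biomarker = beta_fraction(lfp[start:start + window], fs)
        amp = min(amp + step, max_amp) if biomarker > threshold else max(amp - step, 0.0)
        amplitudes.append(round(amp, 1))
    return amplitudes

# Synthetic sensed signal: noise with a 20 Hz burst between seconds 10 and 20.
fs = 250
t = np.arange(0, 30, 1.0 / fs)
lfp = np.random.default_rng(0).normal(0.0, 1.0, t.size)
lfp[10 * fs:20 * fs] += 3.0 * np.sin(2 * np.pi * 20.0 * t[:10 * fs])
print(adaptive_dbs(lfp))   # amplitude ramps up during the burst, then back down
```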

Seminar · Neuroscience · Recording

Edge Computing using Spiking Neural Networks

Shirin Dora
Loughborough University
Nov 5, 2021

Deep learning has made tremendous progress in recent years, but its high computational and memory requirements impose challenges in using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but there has been minimal effort in developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNN) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback pass do not fully exploit the asynchronous event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on the development of approaches for SNNs that target specific problems like low response latency and continual learning. References Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65. Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE. Machingal, P., Thousif, M., Dora, S., Sundaram, S., Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).
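
As background for readers new to SNNs, the sketch below simulates the basic event-based unit they are built from, a leaky integrate-and-fire neuron that emits discrete spikes rather than continuous activations. It is a generic textbook model with assumed parameters, not the specific architectures presented in the talk.

```python
# Minimal sketch of the building block behind SNNs: a leaky integrate-and-fire
# (LIF) neuron producing spike events instead of continuous outputs.
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns the membrane trace and spike times (s)."""
    v, trace, spikes = v_rest, [], []
    for i, I in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + I)   # leaky integration
        if v >= v_thresh:                     # threshold crossing -> spike event
            spikes.append(i * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold drive produces regular spiking.
current = np.full(1000, 1.5)                  # 1 s of input at 1 ms resolution
_, spike_times = lif_neuron(current)
print(f"{len(spike_times)} spikes, first at {spike_times[0] * 1000:.0f} ms")
```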

Seminar · Neuroscience

Improving Communication With the Brain Through Electrode Technologies

Rylie Green
Imperial College London
Oct 27, 2021

Over the past 30 years, bionic devices such as cochlear implants and pacemakers have used a small number of metal electrodes to restore function and monitor activity in patients following disease or injury of excitable tissues. Growing interest in neurotechnologies, facilitated by ventures such as BrainGate, Neuralink and the European Human Brain Project, has increased public awareness of electrotherapeutics and led to both new applications for bioelectronics and a growing demand for less invasive devices with improved performance. Coupled with the rapid miniaturisation of electronic chips, bionic devices are now being developed to diagnose and treat a wide variety of neural and muscular disorders. Of particular interest is the area of high resolution devices that require smaller, more densely packed electrodes. Due to poor integration and communication with body tissue, conventional metallic electrodes cannot meet these size and spatial requirements. We have developed a range of polymer based electronic materials including conductive hydrogels (CHs), conductive elastomers (CEs) and living electrodes (LEs). These technologies provide synergy between low impedance charge transfer, reduced stiffness and the ability to provide a biologically active interface. A range of electrode approaches is presented spanning wearables, implantables and drug delivery devices. This talk outlines the materials development and characterisation of both in vitro properties and translational in vivo performance. The challenges for translation and commercial uptake of novel technologies will also be discussed.

Seminar · Neuroscience · Recording

In vitro bioelectronic models of the gut-brain axis

Róisín Owens
Department of Chemical Engineering and Biotechnology, University of Cambridge
Oct 19, 2021

The human gut microbiome has emerged as a key player in the bidirectional communication of the gut-brain axis, affecting various aspects of homeostasis and pathophysiology. Until recently, the majority of studies that seek to explore the mechanisms underlying the microbiome-gut-brain axis cross-talk relied almost exclusively on animal models, and particularly gnotobiotic mice. Despite the great progress made with these models, various limitations, including ethical considerations and interspecies differences that limit the translatability of data to human systems, pushed researchers to seek alternatives. Over the past decades, the field of in vitro modelling of tissues has experienced tremendous growth, thanks to advances in 3D cell biology, materials science and bioengineering, pushing further the borders of our ability to more faithfully emulate the in vivo situation. Organ-on-chip technology and bioengineered tissues have emerged as highly promising alternatives to animal models for a wide range of applications. In this talk I’ll discuss our progress towards generating a complete platform of the human microbiota-gut-brain axis with integrated monitoring and sensing capabilities. Bringing together principles of materials science, tissue engineering, 3D cell biology and bioelectronics, we are building advanced models of the GI tract and the BBB/NVU, with real-time and label-free monitoring units adapted in the model architecture, towards a robust and more physiologically relevant human in vitro model, aiming to i) elucidate the role of microbiota in gut-brain axis communication, ii) study how diet and impaired microbiota profiles affect various (patho-)physiologies, and iii) test personalised medicine approaches for disease modelling and drug testing.

Seminar · Neuroscience · Recording

Swarms for people

Sabine Hauert
University of Bristol
Oct 8, 2021

As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Cm-sized robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.

Seminar · Neuroscience · Recording

Multisensory Integration: Development, Plasticity, and Translational Applications

Benjamin A. Rowland
Wake Forest School of Medicine
Sep 21, 2021

Seminar · Neuroscience

The Challenge and Opportunities of Mapping Cortical Layer Activity and Connectivity with fMRI

Peter Bandettini
NIMH
Jul 9, 2021

In this talk I outline the technical challenges and current solutions in layer fMRI. Specifically, I describe our acquisition strategies for maximizing resolution, spatial coverage and time efficiency as well as, perhaps most importantly, vascular specificity. Novel applications from our group are shown, including mapping feedforward and feedback connections to M1 during task and sensory input modulation, and to S1 during a sensory prediction task. Layer-specific activity in dorsolateral prefrontal cortex during a working memory task is also demonstrated. Additionally, I’ll show preliminary work on mapping whole-brain, layer-specific resting state connectivity and hierarchy.

Seminar · Neuroscience

Learning to perceive with new sensory signals

Marko Nardini
Durham University
May 19, 2021

I will begin by describing recent research taking a new, model-based approach to perceptual development. This approach uncovers fundamental changes in information processing underlying the protracted development of perception, action, and decision-making in childhood. For example, integration of multiple sensory estimates via reliability-weighted averaging – widely used by adults to improve perception – is often not seen until surprisingly late into childhood, as assessed by both behaviour and neural representations. This approach forms the basis for a newer question: the scope for the nervous system to deploy useful computations (e.g. reliability-weighted averaging) to optimise perception and action using newly-learned sensory signals provided by technology. Our initial model system is augmenting visual depth perception with devices translating distance into auditory or vibro-tactile signals. This problem has immediate applications to people with partial vision loss, but the broader question concerns our scope to use technology to tune in to any signal not available to our native biological receptors. I will describe initial progress on this problem, and our approach to operationalising what it might mean to adopt a new signal comparably to a native sense. This will include testing for its integration (weighted averaging) alongside the native senses, assessing the level at which this integration happens in the brain, and measuring the degree of ‘automaticity’ with which new signals are used, compared with native perception.

Seminar · Neuroscience

A mechanism for interareal coherence based on connectivity and power - applications to LGN-V1 and frontoparietal interactions

Martin Vinck
Ernst Strüngmann Institute (ESI) for Neuroscience
May 17, 2021

Seminar · Neuroscience · Recording

Recurrent network dynamics lead to interference in sequential learning

Friedrich Schuessler
Barak lab, Technion, Haifa, Israel
Apr 29, 2021

Learning in real life is often sequential: A learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease the performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need to not only transform the input, but also generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially the emphasis on input-driven over autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: For task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix. In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: The random connectivity strongly accelerates the learning mode of new formation, and has less effect on rotation. The prior thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
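
The interference-through-correlated-updates mechanism mentioned above can be seen in an even simpler, feed-forward toy than the recurrent models studied in the talk. The hypothetical sketch below trains a linear map by gradient descent on task A and then on task B, and shows that task A is forgotten when the two task inputs overlap but not when they are orthogonal; all sizes and learning rates are assumptions for illustration.

```python
# Minimal illustration (not the study's RNN setup) of interference through
# correlated weight updates in sequential learning of two tasks.
import numpy as np

def train(W, x, target, lr=0.1, steps=200):
    for _ in range(steps):
        err = W @ x - target
        W = W - lr * np.outer(err, x)      # gradient of 0.5 * ||Wx - target||^2
    return W

def loss(W, x, target):
    return 0.5 * np.sum((W @ x - target) ** 2)

rng = np.random.default_rng(0)
n = 20
x_a = rng.normal(size=n); x_a /= np.linalg.norm(x_a)
t_a, t_b = rng.normal(size=n), rng.normal(size=n)

for overlap in (0.0, 0.8):
    # Build task B's input with a controlled overlap with task A's input.
    x_perp = rng.normal(size=n)
    x_perp -= (x_perp @ x_a) * x_a
    x_perp /= np.linalg.norm(x_perp)
    x_b = overlap * x_a + np.sqrt(1 - overlap**2) * x_perp

    W = np.zeros((n, n))
    W = train(W, x_a, t_a)                 # learn task A first
    W = train(W, x_b, t_b)                 # then task B
    print(f"input overlap {overlap:.1f}: task A loss after task B = {loss(W, x_a, t_a):.3f}")
```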

Seminar · Neuroscience · Recording

Applications of Multisensory Facilitation of Learning

Aaron Seitz
University of California, Riverside
Apr 15, 2021

In this talk I’ll discuss translation of findings of multisensory facilitation of learning to cognitive training. I’ll first review some early findings of multisensory facilitation of learning and then discuss how we have been translating these basic science approaches into gamified training interventions to improve cognitive functions. I’ll touch on approaches to training vision, hearing and working memory that we are developing at the UCR Brain Game Center for Mental Fitness and Well-being. I look forward to discussing both the basic science but also the complexities of how to translate approaches from basic science into the more complex frameworks often used in interventions.

Seminar · Neuroscience · Recording

Towards a Translational Neuroscience of Consciousness

Hakwan Lau
UCLA Psychology Department
Mar 25, 2021

The cognitive neuroscience of conscious perception has seen considerable growth over the past few decades. Confirming an influential hypothesis driven by earlier studies of neuropsychological patients, we have found that the lateral and polar prefrontal cortices play important causal roles in the generation of subjective experiences. However, this basic empirical finding has been hotly contested by researchers with different theoretical commitments, and the differences are at times difficult to resolve. To address the controversies, I suggest one alternative avenue may be to look for clinical applications derived from current theories. I outline an example in which we used closed-loop fMRI combined with machine learning to nonconsciously manipulate the physiological responses to threatening stimuli, such as spiders or snakes. A clinical trial involving patients with phobia is currently taking place. I also outline how this theoretical framework may be extended to other diseases. Ultimately, a truly meaningful understanding of the fundamental nature of our mental existence should lead to useful insights for our colleagues on the clinical frontlines. If we use this as a yardstick, whoever loses the esoteric theoretical debates, both science and the patients will always win.

ePoster · Neuroscience

Computer vision and image processing applications on astrocyte-glioma interactions in 3D cell culture

Banu Erdem, Nilüfar Ismayilzada, Gökhan Bora Esmer, Emel Sokullu

FENS Forum 2024

ePoster · Neuroscience

Identifying key structural connections from functional response data: theory & applications

Tirthabir Biswas, Tianzhi Lambus, James Fitzgerald

COSYNE 2022

ePoster · Neuroscience

Extracellular vesicles and transmission of α-synuclein pathology: From cellular models to diagnostic applications

Diana Mjartinová, Karolína Albertusová, Miraj Ud Din Momand, Ľubica Fialová, Dominika Fričová

FENS Forum 2024

ePoster · Neuroscience

Review of applications of graph theory and network neuroscience in the development of artificial neural networks

Jan Bendyk

Neuromatch 5

applications coverage

55 items: 50 Seminars · 5 ePosters