Applications

Topic spotlight: applications (World Wide)

Discover seminars, jobs, and research tagged with applications across World Wide.
65 curated items · 60 Seminars · 5 ePosters
Updated 12 days ago
Seminar · Open Source

Computational bio-imaging via inverse scattering

Shwetadwip Chowdhury
Assistant Professor, University of Texas at Austin
Nov 24, 2025

Optical imaging is a major research tool in the basic sciences, and it is the only imaging modality that routinely enables non-ionizing imaging with subcellular spatial resolution and high imaging speed. In biological imaging applications, however, optical imaging is limited by tissue scattering to short imaging depths. This restricts large-scale bio-imaging to either the outer superficial layers of an organism or to specific components isolated from the organism and prepared in vitro.

Seminar · Neuroscience

The Systems Vision Science Summer School & Symposium, August 11 – 22, 2025, Tuebingen, Germany

Marco Bertamini, David Brainard, Peter Dayan, Andrea van Doorn, Roland Fleming, Pascal Fries, Wilson S Geisler, Robbe Goris, Sheng He, Tadashi Isa, Tomas Knapen, Jan Koenderink, Larry Maloney, Keith May, Marcello Rosa, Jonathan Victor
Aug 21, 2025

Applications are invited for the third edition of the Systems Vision Science (SVS) Summer School, running since 2023 and designed for everyone interested in gaining a systems-level understanding of biological vision. We plan a coherent, graduate-level syllabus on the integration of experimental data with theory and models, featuring lectures, guided exercises, and discussion sessions. The summer school will end with a Systems Vision Science symposium on frontier topics on August 20–22, with additional invited and contributed presentations and posters. A call for contributions and participation in the symposium will be sent out in spring 2025. All summer school participants are invited to attend and welcome to submit contributions to the symposium.


Seminar · Open Source

Open SPM: A Modular Framework for Scanning Probe Microscopy

Marcos Penedo Garcia
Senior scientist, LBNI-IBI, EPFL Lausanne, Switzerland
Jun 23, 2025

OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
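Since the platform exposes a Python-based API that makes experiments scriptable, including automatic control and optimization of PID parameters, a tuning script might be structured along the lines of the minimal sketch below. The controller, the toy first-order plant, and all names here are generic assumptions for illustration, not actual OpenSPM API calls.

```python
# Hypothetical sketch of a scripted PID loop; the plant stands in for a
# feedback signal such as a z-piezo height channel. Not OpenSPM code.

def pid_step(state, error, kp, ki, kd, dt):
    """One PID update; `state` carries the error integral and previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def run_loop(kp, ki, kd, setpoint=1.0, steps=400, dt=0.01):
    """Drive a toy first-order plant toward the setpoint; return final state."""
    z, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(state, setpoint - z, kp, ki, kd, dt)
        z += dt * (-z + u)  # plant: exponential relaxation toward drive u
    return z
```

A script like this could sweep `kp`, `ki`, `kd` and score each run by settling error, which is the kind of automated optimization a scriptable API enables.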

Seminar · Open Source

A Focus on 3D Printed Lenses: Rapid prototyping, low-cost microscopy and enhanced imaging for the life sciences

Liam Rooney
University of Glasgow
May 21, 2025

High-quality glass lenses are commonplace in the design of optical instrumentation used across the biosciences. However, research-grade glass lenses are often costly and delicate and, depending on the prescription, can involve intricate and lengthy manufacturing, even more so for bioimaging applications. This seminar will outline 3D printing as a viable low-cost alternative for the manufacture of high-performance optical elements; I will also discuss the creation of the world’s first fully 3D printed microscope and other implementations of 3D printed lenses. Our 3D printed lenses were produced on consumer-grade 3D printers and offer a 225× materials cost saving compared to glass optics. Moreover, they can be produced in any lab or home environment and offer great potential for education and outreach. Following performance validation, our 3D printed optics were used to build a fully 3D printed microscope and demonstrated in histological imaging applications. We also applied low-cost fabrication methods to exotic lens geometries to enhance resolution and contrast across spatial scales and reveal new biological structures. Across these applications, our findings showed that 3D printed lenses are a viable substitute for commercial glass lenses, with the advantage of being low-cost, accessible, and suitable for use in optical instruments. Combining 3D printed lenses with open-source 3D printed microscope chassis designs opens the door to low-cost rapid prototyping, low-resource field diagnostics, and cheap educational tools.

Seminar · Neuroscience

Probing White Matter Microstructure With Diffusion-Weighted MRI: Techniques and Applications in ADRD

Shruti Mishra
University of Michigan
Aug 6, 2024

Seminar · Neuroscience

Generative models for video games (rescheduled)

Katja Hofmann
Microsoft Research
May 21, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. While diffusion models were previously shown to have impressive image-generation capabilities, I present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar · Neuroscience

Volume measures in studies of hippocampal subfield structure: methodological considerations and applications

Roya Homayouni
May 19, 2024

Seminar · Neuroscience

Generative models for video games

Katja Hofmann
Microsoft Research
Apr 30, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. While diffusion models were previously shown to have impressive image-generation capabilities, I present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar · Neuroscience

Immature brain insults and possible effects on cholinergic system neuroplasticity

Katerina Psarropoulou
Dept of Biological Applications & Technology, University of Ioannina, Greece
Mar 26, 2024

Seminar · Psychology

Where Cognitive Neuroscience Meets Industry: Navigating the Intersections of Academia and Industry

Mirta Stantic
Royal Holloway, University of London
Feb 18, 2024

In this talk, Mirta will share her journey from a mathematically focused high school in Croatia, through international experiences in the US and UK, to her current unconventional career in London. We will explore the concept of interdisciplinary careers in the modern world, viewing them through the framework of increasing demand, flexibility, and dynamism in the current workplace. We will underscore the significance of interdisciplinary research both for launching careers outside of academia and for bolstering those within it. I will challenge the conventional norm of working either in academia or in industry, and encourage discussion about the opportunities for combining the two across a myriad of career paths. I’ll use examples from my own and others’ research to highlight opportunities for early career researchers to extend their work into practical applications. Such an approach leverages the strengths of both sectors, fostering innovation and practical applications of research findings. I hope these insights offer valuable perspectives for those looking to navigate the evolving demands of the global job market, illustrating the advantages of a versatile skill set that spans multiple disciplines and opens exciting career options.

Seminar · Artificial Intelligence · Recording

Mathematical and computational modelling of ocular hemodynamics: from theory to applications

Giovanna Guidoboni
University of Maine
Nov 13, 2023

Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation, from large to small vessels and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for the identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy for these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications. While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both: mechanism-driven models excel at interpretability but lack scalability, while data-driven models excel at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.

Seminar · Neuroscience · Recording

Brain network communication: concepts, models and applications

Caio Seguin
Indiana University
Aug 23, 2023

Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
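To make the shortest-path question concrete, here is a toy sketch (not from the Review; the graph and weights are invented) contrasting the shortest-path routing model with a crude diffusion-style count of all parallel routes on a small weighted network:

```python
import heapq

# Toy undirected "connectome"; edge weights are arbitrary connection costs.
edges = {
    ("A", "B"): 1.0, ("B", "C"): 1.0,   # cheap two-hop route A-B-C
    ("A", "D"): 2.0, ("D", "C"): 2.0,   # costlier alternative route A-D-C
}
graph = {}
for (u, v), w in edges.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

def dijkstra(src):
    """Shortest-path cost from src to every node: the 'routing' model,
    which assumes signals follow only the single optimal path."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def walk_count(src, dst, max_hops):
    """Number of walks of length <= max_hops from src to dst: a crude
    'diffusion' view in which signals spread along all routes at once."""
    base = 1 if src == dst else 0
    if max_hops == 0:
        return base
    return base + sum(walk_count(v, dst, max_hops - 1) for v, _ in graph[src])
```

Under routing, only the A-B-C path matters (cost 2.0); under the walk count, both two-hop routes contribute, which is the kind of distinction the communication-model taxonomy formalizes.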

Seminar · Artificial Intelligence · Recording

Diverse applications of artificial intelligence and mathematical approaches in ophthalmology

Tiarnán Keenan
National Eye Institute (NEI)
Jun 5, 2023

Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and from images accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.

Seminar · Psychology

How AI is advancing Clinical Neuropsychology and Cognitive Neuroscience

Nicolas Langer
University of Zurich
May 16, 2023

This talk aims to highlight the immense potential of Artificial Intelligence (AI) in advancing the field of psychology and cognitive neuroscience. Through the integration of machine learning algorithms, big data analytics, and neuroimaging techniques, AI has the potential to revolutionize the way we study human cognition and brain characteristics. In this talk, I will highlight our latest scientific advancements in utilizing AI to gain deeper insights into variations in cognitive performance across the lifespan and along the continuum from healthy to pathological functioning. The presentation will showcase cutting-edge examples of AI-driven applications, such as deep learning for automated scoring of neuropsychological tests, natural language processing to characterize the semantic coherence of patients with psychosis, and other applications for diagnosing and treating psychiatric and neurological disorders. Furthermore, the talk will address the challenges and ethical considerations associated with using AI in psychological research, such as data privacy, bias, and interpretability. Finally, the talk will discuss future directions and opportunities for further advancements in this dynamic field.

Seminar · Neuroscience · Recording

AI-assisted language learning: Assessing learners who memorize and reason by analogy

Pierre-Alexandre Murena
University of Helsinki
Oct 5, 2022

Vocabulary-learning applications like Duolingo have millions of users around the world, yet are based on very simple heuristics for choosing which teaching material to present to their users. In this presentation, we will discuss the possibility of developing more advanced artificial teachers based on models of the learner’s inner characteristics. For teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in education in an alternative way: not as the teaching algorithm, but as part of the learner model.
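As a concrete illustration of what modeling "how the learner memorizes" can mean, here is a minimal exponential-forgetting sketch in the spirit of half-life-style memory models; the functions and parameters are illustrative assumptions, not Duolingo's actual algorithm or the speaker's model.

```python
import math

# Toy learner-memory model: recall probability decays exponentially with
# time since the last review, and each review rescales the memory
# half-life. All parameter values here are illustrative assumptions.

def recall_probability(elapsed_days, half_life_days):
    """p = 2^(-elapsed / half-life): the exponential forgetting curve."""
    return 2.0 ** (-elapsed_days / half_life_days)

def next_half_life(half_life_days, correct, gain=2.0, penalty=0.5):
    """Strengthen the memory trace after a correct answer, weaken it otherwise."""
    return half_life_days * (gain if correct else penalty)

def schedule_review(half_life_days, target_p=0.9):
    """Days until predicted recall drops to target_p; review the item then."""
    return -half_life_days * math.log2(target_p)
```

A scheduler built on this model would present each word when its predicted recall crosses the target threshold, instead of on a fixed rotation.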

Seminar · Neuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 21, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

Seminar · Neuroscience · Recording

Alternative Applications of Foraging Theory

David Barack & Thomas Hills
University of Pennsylvania, University of Warwick
May 9, 2022

Seminar · Neuroscience · Recording

Visualization and manipulation of our perception and imagery by BCI

Takufumi Yanagisawa
Osaka University
Mar 31, 2022

We have been developing brain-computer interfaces (BCIs) for clinical applications using electrocorticography (ECoG) [1], recorded by electrodes implanted on the brain surface, and magnetoencephalography (MEG) [2], which records cortical activity non-invasively. The invasive ECoG-based BCI has been applied in severely paralyzed patients to restore communication and motor function. The non-invasive MEG-based BCI has been applied as a neurofeedback tool to modulate pathological neural activity in the treatment of neuropsychiatric disorders. Although these techniques were developed for clinical application, BCI is also an important tool for investigating neural function. For example, a motor BCI records neural activity in part of the motor cortex to generate movements of external devices. Although our motor system is a complex system comprising motor cortex, basal ganglia, cerebellum, spinal cord, and muscles, a BCI lets us simplify the motor system into one with exactly known inputs, outputs, and relations between them, so we can investigate the motor system by manipulating the parameters of the BCI system. Recently, we have been developing BCIs to visualize and manipulate our perception and mental imagery. Although these BCIs were developed for clinical application, they will be useful for understanding the neural systems that generate perception and imagery. In this talk, I will introduce our study of phantom limb pain [3], which is controlled by an MEG-BCI, and the development of a communication BCI using ECoG [4], which enables subjects to visualize the contents of their mental imagery. I would also like to discuss how much we can control the cortical activities that represent our perception and mental imagery. These examples demonstrate that BCI is a promising tool for visualizing and manipulating perception and imagery, and for understanding our consciousness.

References
1. Yanagisawa, T., Hirata, M., Saitoh, Y., Kishima, H., Matsushita, K., Goto, T., Fukuma, R., Yokoi, H., Kamitani, Y., and Yoshimine, T. (2012). Electrocorticographic control of a prosthetic arm in paralyzed patients. Ann Neurol 71, 353-361.
2. Yanagisawa, T., Fukuma, R., Seymour, B., Hosomi, K., Kishima, H., Shimizu, T., Yokoi, H., Hirata, M., Yoshimine, T., Kamitani, Y., et al. (2016). Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications 7, 13209.
3. Yanagisawa, T., Fukuma, R., Seymour, B., Tanaka, M., Hosomi, K., Yamashita, O., Kishima, H., Kamitani, Y., and Saitoh, Y. (2020). BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 95, e417-e426.
4. Fukuma, R., Yanagisawa, T., Nishimoto, S., Sugano, H., Tamura, K., Yamamoto, S., Iimura, Y., Fujita, Y., Oshino, S., Tani, N., Koide-Majima, N., Kamitani, Y., and Kishima, H. (2022). Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. arXiv:2112.01223.

SeminarNeuroscienceRecording

Cross-modality imaging of the neural systems that support executive functions

Yaara Erez
Affiliate MRC Cognition and Brain Sciences Unit, University of Cambridge
Feb 28, 2022

Executive functions refer to a collection of mental processes such as attention, planning and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and in patients with brain tumours there is a particular need to preserve them. During surgery to remove a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the surrounding areas, in order to preserve function and enable a good quality of life for patients. In many cases, functional mapping is conducted during awake surgery to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition of the importance of these functions for patient well-being in recent years, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, aimed at reliably localizing executive function regions in individuals using fMRI. I will also discuss the broader challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.

SeminarNeuroscienceRecording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 22, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity of recurrent cortical circuits to initial conditions can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality, computed from the full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on the biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
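To make the notion of a Lyapunov exponent concrete, the sketch below estimates the largest exponent of the classic random rate network dx/dt = -x + J tanh(x) by tracking a renormalized perturbed trajectory (Benettin's method). This is an illustrative toy with assumed parameters (N, dt, coupling variance g²/N); the talk's analysis uses the full Lyapunov spectrum, not just the leading exponent.

```python
import numpy as np

def largest_lyapunov(g, N=200, steps=5000, dt=0.1, d0=1e-7, seed=0):
    """Benettin-style estimate of the largest Lyapunov exponent of the
    random rate network dx/dt = -x + J tanh(x), J_ij ~ N(0, g^2/N)."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((N, N)) * g / np.sqrt(N)
    f = lambda x: -x + J @ np.tanh(x)
    x = rng.standard_normal(N)
    for _ in range(1000):                      # discard transient
        x = x + dt * f(x)
    u = rng.standard_normal(N)
    xp = x + d0 * u / np.linalg.norm(u)        # perturbed twin trajectory
    acc = 0.0
    for _ in range(steps):
        x = x + dt * f(x)
        xp = xp + dt * f(xp)
        d = np.linalg.norm(xp - x)
        acc += np.log(d / d0)                  # log expansion this step
        xp = x + (xp - x) * (d0 / d)           # renormalize separation
    return acc / (steps * dt)                  # exponent per unit time
```

A positive exponent signals chaos (sensitivity to initial conditions); in this model the transition occurs as the coupling gain g crosses 1.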

SeminarNeuroscience

From natural scene statistics to multisensory integration: experiments, models and applications

Cesare Parise
Oculus VR
Feb 8, 2022

To efficiently process sensory information, the brain relies on statistical regularities in the input. While this strategy generally improves the reliability of sensory estimates, it also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian decision theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
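The standard normative model behind audio-visual integration can be written in a few lines: under independent Gaussian noise, the maximum-likelihood estimate weights each cue by its reliability (inverse variance), and the fused estimate is sharper than either cue alone. A minimal sketch of that computation, not the speaker's specific models:

```python
import numpy as np

def fuse(mu_v, sigma_v, mu_a, sigma_a):
    """Reliability-weighted (maximum-likelihood) fusion of a visual and an
    auditory estimate of the same quantity under independent Gaussian noise."""
    w_v, w_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2
    mu = (w_v * mu_v + w_a * mu_a) / (w_v + w_a)   # reliability-weighted mean
    sigma = np.sqrt(1.0 / (w_v + w_a))             # fused estimate is sharper
    return mu, sigma
```

When the cues conflict, the fused percept is pulled toward the more reliable one, which is exactly the structure of illusions such as the ventriloquist effect.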

SeminarNeuroscience

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Jan 23, 2022

Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these representations are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how well their single neurons approximate those in macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition behavior indeed builds on low-level visual processing. Motivated by these results, we then studied how an ANN's robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1, followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
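The core idea of a fixed V1-like front-end can be sketched with an untrained, oriented Gabor filter bank followed by rectification. This toy stand-in (plain NumPy, with assumed filter parameters) illustrates the principle; the published VOneNet front-end is a stochastic, biologically parameterized module, not this code.

```python
import numpy as np

def gabor(size, theta, freq, sigma):
    """Gabor filter (oriented grating under a Gaussian envelope), the
    canonical model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def correlate_same(img, k):
    """Plain-NumPy 'same'-size cross-correlation (our Gabors are even-
    symmetric, so this equals convolution)."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def v1_frontend(img, n_orient=4, freq=0.2, sigma=2.0, size=9):
    """Fixed (untrained) Gabor bank plus rectification: a toy stand-in
    for the V1-like front-end of a VOneNet-style model."""
    maps = []
    for k in range(n_orient):
        g = gabor(size, k * np.pi / n_orient, freq, sigma)
        maps.append(np.maximum(correlate_same(img, g), 0.0))  # simple-cell rectification
    return np.stack(maps)  # (n_orient, H, W)
```

Because the front-end is fixed rather than learned, adversarial gradients cannot reshape it, which is one intuition for the robustness gains described above.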

SeminarNeuroscienceRecording

Mechanisms of sleep-seizure interactions in tuberous sclerosis and other mTORpathies

Michael Wong
Washington University
Jan 4, 2022

An intriguing and relatively unexplored therapeutic avenue in epilepsy is the interaction between sleep mechanisms and seizures. Multiple lines of clinical observation suggest a strong, bi-directional relationship between epilepsy and sleep. Epilepsy and sleep disorders are common comorbidities. Seizures occur more commonly in sleep in many types of epilepsy, and in turn, seizures can cause disrupted sleep. Sudden unexplained death in epilepsy (SUDEP) is strongly associated with sleep. The biological mechanisms underlying this relationship between seizures and sleep are poorly understood, but if better delineated, could offer novel therapeutic approaches to treating both epilepsy and sleep disorders. In this presentation, I will explore this sleep-seizure relationship in mouse models of epilepsy. First, I will present general approaches for performing detailed longitudinal sleep and vigilance state analysis in mice, including pre-weanling neonatal mice. I will then discuss recent data from my laboratory demonstrating an abnormal sleep phenotype in a mouse model of the genetic epilepsy tuberous sclerosis complex (TSC), and its relationship to seizures. The potential mechanistic basis of sleep abnormalities and sleep-seizure interactions in this TSC model will be investigated, focusing on the role of the mechanistic target of rapamycin (mTOR) pathway and hypothalamic orexin, with potential therapeutic applications of mTOR inhibitors and orexin antagonists. Finally, similar sleep-seizure interactions and mechanisms will be extended to models of acquired epilepsy due to status epilepticus-related brain injury.

SeminarNeuroscience

Adaptive Deep Brain Stimulation: Investigational System Development at the Edge of Clinical Brain Computer Interfacing

Jeffrey Herron
University of Washington
Dec 15, 2021

Over the last few decades, the use of deep brain stimulation (DBS) to improve the treatment of those with neurological movement disorders represents a critical success story in the development of invasive neurotechnology and the promise of brain-computer interfaces (BCIs) to improve the lives of those suffering from incurable neurological disorders. In the last decade, investigational devices capable of recording and streaming neural activity from chronically implanted therapeutic electrodes have supercharged research into clinical applications of BCI, enabling in-human studies investigating the use of adaptive stimulation algorithms to further enhance therapeutic outcomes and improve future device performance. In this talk, Dr. Herron will review ongoing clinical research efforts in the field of adaptive DBS systems and algorithms. This will include an overview of DBS in current clinical practice, the development of bidirectional clinical-use research platforms, ongoing algorithm evaluation efforts, and a discussion of current adoption barriers to be addressed in future work.

SeminarNeuroscienceRecording

Edge Computing using Spiking Neural Networks

Shirin Dora
Loughborough University
Nov 4, 2021

Deep learning has made tremendous progress in recent years, but its high computational and memory requirements pose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision arithmetic), but there has been minimal effort in developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback pass do not fully exploit the asynchronous event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on developing approaches for SNNs that target specific problems such as low response latency and continual learning.

References
Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65.
Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.
Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).
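The event-based processing that makes SNNs energy-efficient is easiest to see in their basic unit. Below is a minimal leaky integrate-and-fire (LIF) neuron with assumed parameters (tau, threshold); it is a textbook sketch, not the specific models used in the works above.

```python
import numpy as np

def lif_spikes(input_current, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    zero, integrates the input current, and emits a binary spike whenever
    it crosses threshold. Returns the spike train as a 0/1 array."""
    v, spikes = 0.0, []
    for I in input_current:
        v += (dt / tau) * (-v + I)     # leaky integration
        if v >= v_th:
            spikes.append(1)
            v = v_reset                # reset after spike
        else:
            spikes.append(0)
    return np.array(spikes)
```

Because downstream neurons only receive the sparse 0/1 events, computation and communication happen only when spikes occur, which is the source of the energy advantage on neuromorphic hardware.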

SeminarNeuroscience

Improving Communication With the Brain Through Electrode Technologies

Rylie Green
Imperial College London
Oct 27, 2021

Over the past 30 years, bionic devices such as cochlear implants and pacemakers have used a small number of metal electrodes to restore function and monitor activity in patients following disease or injury of excitable tissues. Growing interest in neurotechnologies, facilitated by ventures such as BrainGate, Neuralink and the European Human Brain Project, has increased public awareness of electrotherapeutics and led to both new applications for bioelectronics and a growing demand for less invasive devices with improved performance. Coupled with the rapid miniaturisation of electronic chips, bionic devices are now being developed to diagnose and treat a wide variety of neural and muscular disorders. Of particular interest is the area of high-resolution devices that require smaller, more densely packed electrodes. Due to poor integration and communication with body tissue, conventional metallic electrodes cannot meet these size and spatial requirements. We have developed a range of polymer-based electronic materials including conductive hydrogels (CHs), conductive elastomers (CEs) and living electrodes (LEs). These technologies combine low-impedance charge transfer, reduced stiffness and the ability to provide a biologically active interface. A range of electrode approaches is presented, spanning wearables, implantables and drug delivery devices. This talk outlines the materials development and the characterisation of both in vitro properties and translational in vivo performance. The challenges for translation and commercial uptake of novel technologies will also be discussed.

SeminarNeuroscienceRecording

In vitro bioelectronic models of the gut-brain axis

Róisín Owens
Department of Chemical Engineering and Biotechnology, University of Cambridge
Oct 18, 2021

The human gut microbiome has emerged as a key player in the bidirectional communication of the gut-brain axis, affecting various aspects of homeostasis and pathophysiology. Until recently, the majority of studies exploring the mechanisms underlying microbiome-gut-brain axis cross-talk relied almost exclusively on animal models, particularly gnotobiotic mice. Despite the great progress made with these models, various limitations, including ethical considerations and interspecies differences that limit the translatability of data to human systems, have pushed researchers to seek alternatives. Over the past decades, the field of in vitro tissue modelling has experienced tremendous growth, thanks to advances in 3D cell biology, materials science and bioengineering, pushing further the borders of our ability to faithfully emulate the in vivo situation. Organ-on-chip technology and bioengineered tissues have emerged as highly promising alternatives to animal models for a wide range of applications. In this talk I'll discuss our progress towards generating a complete platform of the human microbiota-gut-brain axis with integrated monitoring and sensing capabilities. Bringing together principles of materials science, tissue engineering, 3D cell biology and bioelectronics, we are building advanced models of the GI tract and the BBB/NVU, with real-time and label-free monitoring units adapted to the model architecture, towards a robust and more physiologically relevant human in vitro model. We aim to i) elucidate the role of microbiota in gut-brain axis communication, ii) study how diet and impaired microbiota profiles affect various (patho-)physiologies, and iii) test personalised medicine approaches for disease modelling and drug testing.

SeminarNeuroscienceRecording

Swarms for people

Sabine Hauert
University of Bristol
Oct 7, 2021

As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Cm-sized robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.

SeminarNeuroscienceRecording

Multisensory Integration: Development, Plasticity, and Translational Applications

Benjamin A. Rowland
Wake Forest School of Medicine
Sep 20, 2021
SeminarOpen SourceRecording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center of Neurodegenerative Diseases (DZNE) Bonn.
Aug 26, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detecting these structures requires complex decision making that is often impossible with current image analysis software and is therefore typically carried out by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on deep convolutional neural networks (DNNs) is currently emerging as the most promising technique for solving such complex region detection tasks. Here, an artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms such as Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also not much experience in the community of how much training data must be collected to obtain a reasonable prediction result with deep learning based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command line interface and is designed purely for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC to a variety of tasks. In our experience, a surprisingly small amount of sparse label data is needed to train a sufficiently well-performing classifier for typical bioimaging applications.
Not least because of this, YAPiC has become the "standard weapon" for our core facility to detect objects in hard-to-segment images. We would like to present some use cases, such as cell classification in high content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, and actin filament detection in transmission electron microscopy.
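Training from sparse labels boils down to a loss that simply ignores unannotated pixels. The sketch below shows a masked cross-entropy over a per-pixel class-score map; it is a generic illustration of sparse-label pixel classification (with an assumed `ignore` marker), not YAPiC's implementation.

```python
import numpy as np

def sparse_pixel_loss(logits, labels, ignore=-1):
    """Cross-entropy computed only over the sparsely annotated pixels.
    logits: (C, H, W) raw class scores; labels: (H, W) integer classes,
    with `ignore` marking unannotated pixels that contribute no gradient."""
    C = logits.shape[0]
    mask = labels.ravel() != ignore
    z = logits.reshape(C, -1).T[mask]               # (n_labeled, C) scores
    y = labels.ravel()[mask]
    z = z - z.max(axis=1, keepdims=True)            # numerically stable softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(y)), y].mean())
```

Because only labeled pixels enter the mean, a handful of brush strokes per image is enough to drive training, which matches the observation above that surprisingly little annotation is needed.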

SeminarPhysics of Life

Picocalorimeter sensors for liquid samples with applications to chemical reactions and biochemistry

Jinhye Bae
UC San Diego
Jul 29, 2021
SeminarNeuroscience

The Challenge and Opportunities of Mapping Cortical Layer Activity and Connectivity with fMRI

Peter Bandettini
NIMH
Jul 8, 2021

In this talk I outline the technical challenges and current solutions in layer fMRI. Specifically, I describe our acquisition strategies for maximizing resolution, spatial coverage, time efficiency and, perhaps most importantly, vascular specificity. Novel applications from our group are shown, including mapping feedforward and feedback connections to M1 during task and sensory input modulation, and to S1 during a sensory prediction task. Layer-specific activity in dorsolateral prefrontal cortex during a working memory task is also demonstrated. Additionally, I'll show preliminary work on mapping whole-brain layer-specific resting state connectivity and hierarchy.

ePoster

Identifying key structural connections from functional response data: theory & applications

COSYNE 2022


ePoster

Computer vision and image processing applications on astrocyte-glioma interactions in 3D cell culture

Banu Erdem, Nilüfar Ismayilzada, Gökhan Bora Esmer, Emel Sokullu

FENS Forum 2024

ePoster

Extracellular vesicles and transmission of α-synuclein pathology: From cellular models to diagnostic applications

Diana Mjartinová, Karolína Albertusová, Miraj Ud Din Momand, Ľubica Fialová, Dominika Fričová

FENS Forum 2024

ePoster

Review of applications of graph theory and network neuroscience in the development of artificial neural networks

Jan Bendyk

Neuromatch 5