Performance Improvement
Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks
Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, typically slower for small networks and faster for large ones. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
Understanding and Enhancing Creative Analogical Reasoning
This talk will focus on our lab's extensive research on understanding and enhancing creative analogical reasoning. I will cover the development of the analogy finding matrix task, evidence for conscious augmentation of creative state during this task, and the real-world implications this ability has for college STEM education. I will also discuss recent research aimed at enhancing performance on this creative analogical reasoning task using both transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS).
Analogies in motor learning - acquisition and refinement of movement skills
Analogies are widely used by teachers and coaches of different movement disciplines, serving a role both during the learning phase of a new skill and in honing one’s performance to a competitive level. In previous studies, analogies improved motor control in various tasks and across age groups. Our study aimed to evaluate the efficacy of analogies throughout the learning process, using kinematic measures for an in-depth analysis. We tested whether applying analogies can shorten the motor learning process and induce insight and skill improvement in tasks that usually demand many hours of practice. The experiment included a drawing task, in which subjects were asked to connect four dots into a closed shape, and a mirror game, in which subjects tracked an oval that moved across the screen. After establishing a baseline, subjects were given an analogy, explicit instructions, or no further instruction. We compared their improvement in overall skill, accuracy, and speed. Subjects in the analogy and explicit groups improved their performance in the drawing task, while in the mirror game significant differences between the analogy and control groups were found only for slow movements. In conclusion, analogies are an important tool for teachers and coaches, and more research is needed to understand how to apply them for maximum results. They can rapidly change motor control and strategy but may also affect only some aspects of a movement and not others. Careful thought is needed to construct an effective analogy that encompasses relevant movement facets, as well as the practitioner’s personal background and experience.
Mechanisms of Perceptual Learning
Perceptual learning (PL) is defined as long-term performance improvement on a perceptual task as a result of perceptual experience (Sasaki, Nanez & Watanabe, 2011, Nat Rev Neurosci). We first found that PL occurs for task-irrelevant and subthreshold features and that pairing task-irrelevant features with rewards is the key to forming task-irrelevant PL (TIPL) (Watanabe, Nanez & Sasaki, 2001, Nature; Watanabe et al, 2002, Nat Neurosci; Seitz & Watanabe, 2003, Nature; Seitz, Kim & Watanabe, 2009, Neuron; Shibata et al, 2011, Science). These results suggest that PL occurs as a result of interactions between reinforcement and bottom-up stimulus signals (Seitz & Watanabe, 2005, TICS). On the other hand, fMRI study results indicate that the lateral prefrontal cortex fails to detect, and thus to suppress, subthreshold task-irrelevant signals. This leads to the paradoxical effect that a signal that is below, but close to, one’s discrimination threshold ends up being stronger than suprathreshold signals (Tsushima, Sasaki & Watanabe, 2006, Science). We confirmed this mechanism with the following results: task-irrelevant learning occurs only when a presented feature is below and close to the threshold in younger individuals (Tsushima et al, 2009, Curr Biol), whereas in older individuals, who tend to have less inhibitory control, task-irrelevant learning occurs with a feature whose signal is much greater than the threshold (Chang et al, 2014, Curr Biol). From all of these results, we conclude that attention and reward play important but different roles in PL. I will further discuss different stages and phases in mechanisms of PL (Seitz et al, 2005, PNAS; Yotsumoto, Watanabe & Sasaki, 2008, Neuron; Yotsumoto et al, 2009, Curr Biol; Watanabe & Sasaki, 2015, Ann Rev Psychol; Shibata et al, 2017, Nat Neurosci; Tamaki et al, 2020, Nat Neurosci).
Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks
The emergence of brain-inspired neuromorphic computing as a paradigm for edge AI is motivating the search for high-performance and efficient spiking neural networks to run on this hardware. However, compared to classical neural networks in deep learning, current spiking neural networks still fall short of competitive performance in compelling areas. Here, for sequential and streaming tasks, we demonstrate how spiking recurrent neural networks (SRNNs) using adaptive spiking neurons achieve state-of-the-art performance compared to other spiking neural networks and approach or exceed the performance of classical recurrent neural networks (RNNs) while exhibiting sparse activity. From this, we calculate a 100x energy improvement for our SRNNs over classical RNNs on the harder tasks. We find in particular that adapting the timescales of spiking neurons is crucial for achieving this performance, and we demonstrate it for SRNNs with different spiking neuron models.
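To make the mechanism concrete, here is a minimal sketch of the kind of adaptive spiking neuron the abstract refers to: a leaky integrate-and-fire unit whose firing threshold is raised by an adaptation trace that decays on a second, slower timescale. All parameter values and the function name are illustrative assumptions, not taken from the talk:

```python
import math

def adaptive_lif_step(v, b, input_current, dt=1.0, tau_m=20.0,
                      tau_adp=200.0, v_th0=1.0, beta=1.8):
    """One Euler step of an adaptive LIF neuron (illustrative parameters).

    v is the membrane potential (fast timescale tau_m); b is an
    adaptation trace that raises the effective firing threshold and
    decays on the slower timescale tau_adp.
    """
    alpha = math.exp(-dt / tau_m)    # fast membrane decay factor
    rho = math.exp(-dt / tau_adp)    # slow adaptation decay factor
    v = alpha * v + (1.0 - alpha) * input_current
    b = rho * b
    spiked = v >= v_th0 + beta * b   # adaptive threshold
    if spiked:
        v = 0.0                      # reset the membrane potential
        b += 1.0 - rho               # bump the adaptation trace
    return v, b, spiked

# Drive the neuron with constant input: the rising threshold stretches
# the inter-spike intervals over time, yielding sparser activity.
v, b = 0.0, 0.0
spike_times = []
for t in range(200):
    v, b, spiked = adaptive_lif_step(v, b, input_current=1.5)
    if spiked:
        spike_times.append(t)
```

Because the two time constants differ by an order of magnitude, the neuron responds quickly to its input while its firing rate adapts slowly; tuning such timescales per neuron is the kind of adaptation the talk identifies as crucial.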