Hardware
Professor Peter Stone
Texas Robotics at the University of Texas at Austin invites applications for a tenure-track faculty position at the rank of Assistant Professor, with a tenure home in the Mechanical Engineering department. Outstanding candidates in all areas of robotics will be considered, with emphasis on novel hardware and control techniques. Successful candidates are expected to pursue an active research program, to teach both graduate and undergraduate courses, and to supervise students in research. The University is fully committed to building a world-class faculty, and we welcome candidates who resonate with our core values of learning, discovery, freedom, leadership, individual opportunity, and responsibility. Candidates who are committed to broadening participation in robotics, at all levels, are strongly encouraged to apply.
Open Hardware Microfluidics
What’s the point of having scientific and technological innovations when only a few can benefit from them? How can we make science more inclusive? These questions are always in the back of my mind when we perform research in our laboratory, and we place a strong focus on the accessibility of our developed methods, from microfabrication to sensor development.
Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms
Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10 μm to 2 mm across a 325 square centimeter area for extended durations, ranging from hours to days, at high resolution. Utilizing Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
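The core closed-loop idea behind an autonomous tracking microscope can be sketched in a few lines. This is a minimal illustration, not the actual Trackoscope implementation: it assumes the organism appears as a bright region against a dark background, finds its centroid, and issues a proportional stage move to re-center it.

```python
import numpy as np

def centroid_offset(frame, threshold):
    """Return (dx, dy) of the organism's centroid relative to frame center.

    frame: 2D grayscale image as a NumPy array; the organism is assumed
    brighter than `threshold` (a simplification of real segmentation).
    """
    ys, xs = np.nonzero(frame > threshold)
    if len(xs) == 0:
        return 0.0, 0.0          # nothing detected: hold position
    cx, cy = xs.mean(), ys.mean()
    h, w = frame.shape
    return cx - w / 2.0, cy - h / 2.0

def stage_command(dx, dy, gain=0.5):
    """Proportional controller: convert pixel offset into a stage move."""
    return -gain * dx, -gain * dy   # move the stage opposite the offset
```

In a real tracker this loop would run once per camera frame, with the gain tuned so the stage neither lags behind fast swimmers nor oscillates.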
A Breakdown of the Global Open Science Hardware (GOSH) Movement
This seminar, hosted by the LIBRE hub project, will provide an in-depth introduction to the Global Open Science Hardware (GOSH) movement. Since its inception, GOSH has been instrumental in advancing open-source hardware within scientific research, fostering a diverse and active community. The seminar will cover the history of GOSH, its current initiatives, and future opportunities, with a particular focus on the contributions and activities of the Latin American branch. This session aims to inform researchers, educators, and policy-makers about the significance and impact of GOSH in promoting accessibility and collaboration in science instrumentation.
Open source FPGA tools for building research devices
Edmund will present why to use FPGAs when building scientific instruments; when and why to use open source FPGA tools; the history and current status of their development; currently supported FPGA families and functions; current developments in design languages and tools; the community; freely available design blocks; and possible future developments.
OpenSFDI: an open hardware project for label-free measurements of tissue optical properties with spatial frequency domain imaging
Spatial frequency domain imaging (SFDI) is a diffuse optical measurement technique that can quantify tissue optical absorption and reduced scattering on a pixel-by-pixel basis. Measurements of absorption at different wavelengths enable the extraction of molar concentrations of tissue chromophores over a wide field, providing a noncontact and label-free means to assess tissue viability, oxygenation, microarchitecture, and molecular content. In this talk, I will describe openSFDI, an open-source guide for building a low-cost, small-footprint, multi-wavelength SFDI system capable of quantifying absorption and reduced scattering as well as oxyhemoglobin and deoxyhemoglobin concentrations in biological tissue. The openSFDI project has a companion website which provides a complete parts list along with detailed instructions for assembling the openSFDI system. I will also review several technological advances our lab has recently made, including the extension of SFDI to the shortwave infrared wavelength band (900-1300 nm), where water and lipids provide strong contrast. Finally, I will discuss several preclinical and clinical applications for SFDI, including applications related to cancer, dermatology, rheumatology, cardiovascular disease, and others.
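The first processing step in SFDI is demodulating the phase-shifted sinusoidal illumination images into AC and DC amplitude maps; the standard three-phase formula can be sketched as follows. This is a simplified illustration: the openSFDI pipeline also includes calibration against a reference phantom and model-based inversion to optical properties, which are omitted here.

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Three-phase SFDI demodulation.

    i1, i2, i3: images of the same sinusoidal illumination pattern shifted
    by 0, 2*pi/3 and 4*pi/3. Returns the per-pixel AC and DC amplitudes,
    from which absorption and reduced scattering are later derived via a
    light-transport model.
    """
    m_ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    m_dc = (i1 + i2 + i3) / 3.0
    return m_ac, m_dc
```

On a synthetic pattern with known amplitudes, the demodulated maps recover those amplitudes exactly, which is a quick sanity check for an SFDI processing chain.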
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity
Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network (SNN) architecture that is enriched by Hebbian synaptic plasticity. We experimentally show that our memory-equipped SNN model outperforms state-of-the-art deep learning mechanisms in a sequential pattern-memorization task, as well as demonstrate superior out-of-distribution generalization capabilities compared to these models. We further show that our model can be successfully applied to one-shot learning and classification of handwritten characters, improving over the state-of-the-art SNN model. We also demonstrate the capability of our model to learn associations for audio-to-image synthesis from spoken and handwritten digits. Our SNN model further presents a novel solution to a variety of cognitive question answering tasks from a standard benchmark, achieving comparable performance to both memory-augmented ANN and SNN-based state-of-the-art solutions to this problem. Finally, we demonstrate that our model is able to learn from rewards on an episodic reinforcement learning task and attain a near-optimal strategy on a memory-based card game. Hence, our results show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities. Since local Hebbian plasticity can easily be implemented in neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be built based on this principle.
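The local Hebbian update at the heart of such memory-enriched networks can be sketched as an outer-product rule with decay. This is a deliberately simplified illustration, not the paper's spiking model: spiking dynamics, gating, and trace variables are omitted, but the locality that makes the rule attractive for neuromorphic hardware is visible.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.1, decay=0.01):
    """One local Hebbian step on an association matrix.

    pre, post: activity vectors for one time step. Each synapse is updated
    using only its own pre- and post-synaptic activity, so the rule needs
    no global error signal.
    """
    return (1.0 - decay) * w + eta * np.outer(post, pre)

def recall(w, cue):
    """Read out the stored association for a cue pattern."""
    return w @ cue
```

Repeatedly pairing a cue with a target pattern strengthens exactly the synapses linking them, so presenting the cue later reactivates the target, which is the pattern-completion behavior the abstract builds on.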
Algorithm-Hardware Co-design for Efficient and Robust Spiking Neural Networks
General purpose event-based architectures for deep learning
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features -- event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
Online Training of Spiking Recurrent Neural Networks With Memristive Synapses
Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge. This is due mainly to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to using memristive devices for in-memory computing to resolve the von Neumann bottleneck problem, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. In this talk, I will present our recent work where we introduced a PyTorch simulation framework of memristive crossbar arrays that enables accurate investigation of such challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in the presented simulation framework. Although e-prop locally approximates the ideal synaptic updates, it is difficult to implement the updates on the memristive substrate due to substantial device non-idealities. I will mention several widely adopted weight update schemes that primarily aim to cope with these device non-idealities and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
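One widely used scheme for coping with limited weight resolution is to accumulate high-precision gradients and only program the device once the accumulator exceeds the smallest conductance step. A minimal sketch of that idea follows; it is an idealization (real memristive devices also exhibit noise and asymmetric updates), not the talk's actual implementation.

```python
import numpy as np

def accumulated_update(w, grad_acc, grad, lr, step):
    """Apply gradients to low-resolution (memristive) weights.

    Gradients are accumulated in a high-precision buffer; a device write is
    only issued once the buffer exceeds the programmable conductance step
    `step`, so small e-prop-style updates are not lost to quantization.
    """
    grad_acc = grad_acc - lr * grad            # high-precision accumulator
    n_steps = np.trunc(grad_acc / step)        # whole device steps to apply
    w = w + n_steps * step                     # quantized weight change
    grad_acc = grad_acc - n_steps * step       # keep the remainder
    return w, grad_acc
```

With a step of 0.25 and a learning rate of 0.0625, four identical gradients are needed before the weight moves at all; without the accumulator, every one of those updates would round to zero.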
Canonical neural networks perform active inference
The free-energy principle and active inference have received significant attention in the fields of neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively interacts with its environment. To address this issue, we show that a class of canonical neural networks of rate coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks, featuring delayed modulation of Hebbian plasticity, can perform planning and adaptive behavioural control in the Bayes-optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate implicit priors under which the agent’s neural network operates and identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.
A Flexible Platform for Monitoring Cerebellum-Dependent Sensory Associative Learning
Climbing fiber inputs to Purkinje cells provide instructive signals critical for cerebellum-dependent associative learning. Studying these signals in head-fixed mice facilitates the use of imaging, electrophysiological, and optogenetic methods. Here, a low-cost behavioral platform (~$1000) was developed that allows tracking of associative learning in head-fixed mice that locomote freely on a running wheel. The platform incorporates two common associative learning paradigms: eyeblink conditioning and delayed tactile startle conditioning. Behavior is tracked using a camera, and wheel movement with a detector. We describe the components and setup and provide a detailed protocol for training and data analysis. This platform allows the incorporation of optogenetic stimulation and fluorescence imaging. The design allows a single host computer to control multiple platforms for training multiple animals simultaneously.
Measuring the Motions of Mice: Open source tracking with the KineMouse Wheel
Who says you can't reinvent the wheel?! This running wheel for head-fixed mice allows 3D reconstruction of body kinematics using a single camera and DeepLabCut (or similar) software. A lightweight, transparent polycarbonate floor and a mirror mounted on the inside allow two views to be captured simultaneously. All parts are commercially available or laser-cut.
Open-source neurotechnologies for imaging cortex-wide neural activity in behaving animals
Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior. However, there is a lack of tools that allow simultaneous monitoring and perturbing of neural activity from multiple cortical regions. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures such as large bilateral craniotomies across the cortex and skull thinning in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (over 300 days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular and subcellular resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm² and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex with resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open field behaviors, social interactions and transitions from wakefulness to sleep.
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
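Although GeNN's value lies in generating optimized GPU code, the per-timestep computation it produces for a simple model is the familiar leaky integrate-and-fire update. A NumPy sketch of one such timestep follows; this is illustrative only and is not GeNN's API (in GeNN, the equivalent kernel would be generated from a model description and run in parallel on the GPU).

```python
import numpy as np

def lif_step(v, spikes_in, w, tau=20.0, v_thresh=1.0, dt=1.0):
    """One explicit-Euler step of a leaky integrate-and-fire population.

    v: membrane potentials; spikes_in: presynaptic spike vector for this
    timestep; w: weight matrix. Simplified: no refractoriness, no delays.
    """
    i_syn = w @ spikes_in                      # synaptic input current
    v = v + dt * (-v / tau + i_syn)            # leaky integration
    spikes_out = v >= v_thresh                 # threshold crossing
    v = np.where(spikes_out, 0.0, v)           # reset spiking neurons
    return v, spikes_out
```

A code generator like GeNN emits exactly this kind of update, specialized to the user's neuron and synapse models, as a CUDA or OpenCL kernel executed for every neuron in parallel.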
Building a Simple and Versatile Illumination System for Optogenetic Experiments
Controlling biological processes using light has increased the accuracy and speed with which researchers can manipulate many biological processes. Optical control allows for an unprecedented ability to dissect function and holds the potential for enabling novel genetic therapies. However, optogenetic experiments require adequate light sources with spatial, temporal, or intensity control, often a bottleneck for researchers. Here we detail how to build a low-cost and versatile LED illumination system that is easily customizable for different available optogenetic tools. This system is configurable for manual or computer control with adjustable LED intensity. We provide an illustrated step-by-step guide for building the circuit, making it computer-controlled, and constructing the LEDs. To facilitate the assembly of this device, we also discuss some basic soldering techniques and explain the circuitry used to control the LEDs. Using our open-source user interface, users can automate precise timing and pulsing of light on a personal computer (PC) or an inexpensive tablet. This automation makes the system useful for experiments that use LEDs to control genes, signaling pathways, and other cellular activities that span large time scales. For this protocol, no prior expertise in electronics is required to build all the parts needed or to use the illumination system to perform optogenetic experiments.
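The timing automation the user interface provides boils down to replaying a pulse schedule on the LED driver circuit. A hypothetical sketch of how such a schedule could be generated follows; the function and parameter names are illustrative, not the project's actual code.

```python
def pulse_schedule(period_s, pulse_width_s, duration_s):
    """Build a list of (time_s, led_on) events for a square pulse train.

    period_s: time between pulse onsets; pulse_width_s: how long the LED
    stays on each cycle; duration_s: total experiment length. A controller
    (PC, tablet, or microcontroller) would replay these events to switch
    the LED driver on and off.
    """
    if pulse_width_s >= period_s:
        raise ValueError("pulse width must be shorter than the period")
    events = []
    t = 0.0
    while t < duration_s:
        events.append((t, True))                   # LED on
        events.append((t + pulse_width_s, False))  # LED off
        t += period_s
    return events
```

For long-timescale optogenetic experiments, precomputing the schedule like this keeps the timing logic separate from the hardware interface, so the same protocol can drive manual or computer-controlled setups.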
Deception, ExoNETs, SmushWare & Organic Data: Tech-facilitated neurorehabilitation & human-machine training
Making use of visual display technology and human-robotic interfaces, many researchers have illustrated various opportunities to distort visual and physical realities. We have had success with interventions such as error augmentation, sensory crossover, and negative viscosity. Judicious application of these techniques leads to training situations that enhance the learning process and can restore movement ability after neural injury. I will trace out clinical studies that have employed such technologies to improve health and function, and share some leading-edge insights, including deceiving the patient, moving the "smarts" of software into the hardware, and examining clinical effectiveness.
Maths, AI and Neuroscience meeting
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there should be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting, we bring together experts from mathematics, artificial intelligence, and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, to understand high-dimensional data; explainable AI; how AI can help neuroscience; and to what extent the brain may be using algorithms similar to those used in modern machine learning. Finally, we will wrap up with a discussion on some aspects of neural hardware that may not have been considered in machine learning.
NMC4 Short Talk: What can 140,000 Reaches Tell Us About Demographic Contributions to Visuomotor Adaptation?
Motor learning is typically assessed in the lab, affording a high degree of control over the task environment. However, this level of control often comes at the cost of smaller sample sizes and a homogenous pool of participants (e.g. college students). To address this, we have designed a web-based motor learning experiment, making it possible to reach a larger, more diverse set of participants. As a proof-of-concept, we collected 1,581 participants completing a visuomotor rotation task, where participants controlled a visual cursor on the screen with their mouse and trackpad. Motor learning was indexed by how fast participants were able to compensate for a 45° rotation imposed between the cursor and their actual movement. Using a cross-validated LASSO regression, we found that motor learning varied significantly with the participant’s age and sex, and also strongly correlated with the location of the target, visual acuity, and satisfaction with the experiment. In contrast, participants' mouse and browser type were features eliminated by the model, indicating that motor performance was not influenced by variations in computer hardware and software. Together, this proof-of-concept study demonstrates how large datasets can generate important insights into the factors underlying motor learning.
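To illustrate the feature-elimination property relied on here: LASSO's L1 penalty drives the coefficients of uninformative predictors exactly to zero. A minimal coordinate-descent sketch follows; this is not the study's actual pipeline (which used a cross-validated LASSO over demographic and hardware features), just the underlying mechanism.

```python
import numpy as np

def lasso(X, y, lam, n_iter=200):
    """LASSO via coordinate descent with soft-thresholding.

    Minimizes 0.5 * ||y - X w||^2 + lam * ||w||_1. Features whose
    association with y is weaker than the penalty are driven exactly to
    zero, which is how a LASSO 'eliminates' uninformative predictors such
    as mouse or browser type.
    """
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            # residual excluding feature j's current contribution
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

On synthetic data with a sparse ground truth, the irrelevant coefficients collapse to (near) zero while the informative ones are recovered with a small shrinkage bias.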
NMC4 Short Talk: Rank similarity filters for computationally-efficient machine learning on high dimensional data
Real-world datasets commonly contain nonlinearly separable classes, requiring nonlinear classifiers. However, these classifiers are less computationally efficient than their linear counterparts. This inefficiency wastes energy, resources and time. We were inspired by the efficiency of the brain to create a novel type of computationally efficient Artificial Neural Network (ANN) called Rank Similarity Filters. They can be used to both transform and classify nonlinearly separable datasets with many datapoints and dimensions. The weights of the filters are set using the rank orders of features in a datapoint, or optionally the 'confusion'-adjusted ranks between features (determined from their distributions in the dataset). The activation strength of a filter determines its similarity to other points in the dataset, a measure based on cosine similarity. The activation of many Rank Similarity Filters transforms samples into a new nonlinear space suitable for linear classification (Rank Similarity Transform (RST)). We additionally used this method to create the nonlinear Rank Similarity Classifier (RSC), which is a fast and accurate multiclass classifier, and the nonlinear Rank Similarity Probabilistic Classifier (RSPC), which is an extension to the multilabel case. We evaluated the classifiers on multiple datasets and RSC is competitive with existing classifiers but with superior computational efficiency. Code for RST, RSC and RSPC is open source and was written in Python using the popular scikit-learn framework to make it easily accessible (https://github.com/KatharineShapcott/rank-similarity). In future extensions the algorithm can be applied to hardware suitable for the parallelization of an ANN (GPU) and a Spiking Neural Network (neuromorphic computing) with corresponding performance gains. This makes Rank Similarity Filters a promising biologically inspired solution to the problem of efficient analysis of nonlinearly separable data.
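One possible reading of the described mechanism, rank-order weights combined with a cosine-similarity activation, can be sketched as follows. This is an illustrative interpretation only (ties and the 'confusion'-adjusted ranks are omitted); the published implementation is in the linked repository.

```python
import numpy as np

def rank_features(x):
    """Replace each feature by its rank within the datapoint (0 = smallest).

    Ties are ignored for simplicity; `argsort` of `argsort` yields the rank
    of each feature for distinct values.
    """
    return np.argsort(np.argsort(x)).astype(float)

def rank_similarity_transform(X, filters):
    """Project rank vectors onto rank-order filters via cosine similarity.

    filters: array of rank vectors (e.g. the ranks of training exemplars).
    The output is a nonlinear embedding in which classes that were not
    linearly separable in the input space may become so.
    """
    R = np.stack([rank_features(x) for x in X])
    Rn = R / np.linalg.norm(R, axis=1, keepdims=True)
    Fn = filters / np.linalg.norm(filters, axis=1, keepdims=True)
    return Rn @ Fn.T
```

Because only rank orders enter the computation, the transform is invariant to any monotonic rescaling of the input features, which is part of what makes rank-based methods robust on heterogeneous data.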
Norse: A library for gradient-based learning in Spiking Neural Networks
We introduce Norse: An open-source library for gradient-based training of spiking neural networks. In contrast to neuron simulators which mainly target computational neuroscientists, our library seamlessly integrates with the existing PyTorch ecosystem using abstractions familiar to the machine learning community. This has immediate benefits in that it provides a familiar interface, hardware accelerator support and, most importantly, the ability to use gradient-based optimization. While many parallel efforts in this direction exist, Norse emphasizes flexibility and usability in three ways. Users can conveniently specify feed-forward (convolutional) architectures, as well as arbitrarily connected recurrent networks. We strictly adhere to a functional and class-based API such that neuron primitives and, for example, plasticity rules compose. Finally, the functional core API ensures compatibility with the PyTorch JIT and ONNX infrastructure. We have made progress to support network execution on the SpiNNaker platform and plan to support other neuromorphic architectures in the future. While the library is useful in its present state, it also has limitations we will address in ongoing work. In particular, we aim to implement event-based gradient computation, using the EventProp algorithm, which will allow us to support sparse event-based data efficiently, as well as work towards support of more complex neuron models. With this library, we hope to contribute to a joint future of computational neuroscience and neuromorphic computing.
Efficient GPU training of SNNs using approximate RTRL
Last year’s SNUFA workshop report concluded “Moving toward neuron numbers comparable with biology and applying these networks to real-world data-sets will require the development of novel algorithms, software libraries, and dedicated hardware accelerators that perform well with the specifics of spiking neural networks” [1]. Taking inspiration from machine learning libraries — where techniques such as parallel batch training minimise latency and maximise GPU occupancy — as well as our previous research on efficiently simulating SNNs on GPUs for computational neuroscience [2,3], we are extending our GeNN SNN simulator to pursue this vision. To explore GeNN’s potential, we use the eProp learning rule [4] — which approximates RTRL — to train SNN classifiers on the Spiking Heidelberg Digits and the Spiking Sequential MNIST datasets. We find that the performance of these classifiers is comparable to those trained using BPTT [5] and verify that the theoretical advantages of neuron models with adaptation dynamics [5] translate to improved classification performance. We then measured execution times and found that training an SNN classifier using GeNN and eProp becomes faster than SpyTorch and BPTT after less than 685 timesteps and much larger models can be trained on the same GPU when using GeNN. Furthermore, we demonstrate that our implementation of parallel batch training improves training performance by over 4× and enables near-perfect scaling across multiple GPUs. Finally, we show that performing inference using a recurrent SNN using GeNN uses less energy and has lower latency than a comparable LSTM simulated with TensorFlow [6].
Optimal initialization strategies for Deep Spiking Neural Networks
Recent advances in neuromorphic hardware and Surrogate Gradient (SG) learning highlight the potential of Spiking Neural Networks (SNNs) for energy-efficient signal processing and learning. Like in Artificial Neural Networks (ANNs), training performance in SNNs strongly depends on the initialization of synaptic and neuronal parameters. While there are established methods of initializing deep ANNs for high performance, effective strategies for optimal SNN initialization are lacking. Here, we address this gap and propose flexible data-dependent initialization strategies for SNNs.
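One simple data-dependent strategy in this spirit is to draw weights at a standard scale and then rescale them from the statistics of a data batch so that input currents land in a responsive range. The sketch below is hypothetical and illustrative, not the authors' proposed method.

```python
import numpy as np

def data_dependent_init(n_in, n_out, batch, target_std=1.0, rng=None):
    """Initialize weights, then rescale from input statistics.

    batch: array of shape (n_samples, n_in) of inputs (e.g. spike counts
    per input channel). Weights start as N(0, 1/n_in) and are rescaled so
    that the summed input current each neuron receives has standard
    deviation `target_std`, keeping early membrane potentials neither
    silent nor saturated.
    """
    rng = rng if rng is not None else np.random.default_rng()
    w = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
    currents = batch @ w.T                     # (n_samples, n_out)
    scale = target_std / (currents.std() + 1e-12)
    return w * scale
```

Unlike fixed fan-in rules carried over from ANNs, this uses the actual input statistics, which matters for SNNs where spike rates vary widely between datasets.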
Machine Learning with SNNs for low-power inference on neuromorphic hardware
The Open-Source UCLA Miniscope Project
The Miniscope Project, an open-source collaborative effort, was created to accelerate innovation of miniature microscope technology and to increase global access to this technology. Currently, we are working on advancements ranging from optogenetic stimulation and wire-free operation to simultaneous optical and electrophysiological recording. Using these systems, we have uncovered mechanisms underlying temporal memory linking and investigated causes of cognitive deficits in temporal lobe epilepsy. Through innovation and optimization, this work aims to extend the reach of neuroscience research and create new avenues of scientific inquiry.
Autopilot v0.4.0 - Distributing development of a distributed experimental framework
Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed to not only give researchers a tool that allows them to perform the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition to lower barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. Currently we are preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that makes use of semantic web technology to form a technical and contextual knowledge repository. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to make a communal knowledge system that gives a mechanism for sharing the contextual technical knowledge that is always excluded from methods sections, but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to make a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it.
I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.
Creating and controlling visual environments using BonVision
Real-time rendering of closed-loop visual environments is important for next-generation understanding of brain function and behaviour, but is often prohibitively difficult for non-experts to implement and is limited to few laboratories worldwide. We developed BonVision as an easy-to-use open-source software for the display of virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and is capable of supporting new experimental designs in other animal models of vision. As the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks, and communication with behavioural and physiological measurement and manipulation devices.
Storythinking: Why Your Brain is Creative in Ways that Computer AI Can't Ever Be
Computer AI thinks differently from us, which is why it's such a useful tool. Thanks to the ingenuity of human programmers, AI's different method of thinking has made humans redundant at certain tasks, such as chess. Yet there are mechanical limits to how far AI can replicate the products of human thinking. In this talk, we'll trace one such limit by exploring how AI and humans create differently. Humans create by reverse-engineering tools or behaviors to accomplish new actions. AI creates by mixing and matching pieces of preexisting structures and labeling which combinations are associated with positive and negative results. This different procedure is why AI cannot (and will never) learn to innovate technology or tactics, and why it also cannot (and will never) learn to generate narratives (including novels, business plans, and scientific hypotheses). It also serves as a case study in why there's no reason to believe in "general intelligence" and why computer AI would have to partner with other mechanical forms of AI (run on non-computer hardware that, as of yet, does not exist, and would require humans to invent) for AI to take over the globe.
OpenFlexure
OpenFlexure is a 3D-printed flexure translation stage developed by a group at the University of Bath. The stage is capable of sub-micron motion with very small drift over time, which makes it well suited, among other things, to time-lapse protocols that run over days or weeks and to space-restricted areas such as fume hoods.
Open-source tools for systems neuroscience
Open-source tools are gaining an increasing foothold in neuroscience. The rising complexity of experiments in systems neuroscience has led to a need for multiple parts of experiments to work together seamlessly. This means that open-source tools, which freely interact with each other and can be understood and modified more easily, allow scientists to conduct better experiments with less effort than closed tools. Open Ephys is an organization with team members distributed all around the world. Our mission is to advance our understanding of the brain by promoting community ownership of the tools we use to study it. We are making and distributing cutting-edge tools that exploit modern technology to bring down the price and complexity of neuroscience experiments. A large component of this is taking tools that were developed in academic labs and helping with documentation, support, and distribution. More recently, we have been working on bringing high-quality manufacturing, distribution, warranty, and support to open source tools by partnering with OEPS in Portugal. We are now also establishing standards that make it possible to seamlessly combine methods, such as miniaturized microscopes, electrode drive implants, and silicon probes, in one system. In the longer term, our development of new tools and interfaces and our standardization efforts aim to make it possible for scientists to easily run complex experiments that span complex behaviors and tasks, multiple recording modalities, and data-processing pipelines.
Feeding Experimentation Device version 3 (FED3)
FED3 is a device for behavioral training of mice in vivarium home cages. Mice interact with FED3 through two nose-pokes, and FED3 responds with visual stimuli, auditory stimuli, and by dispensing pellets. Because it operates in the home cage, FED3 can train mice around the clock over several weeks. FED3 is open source and can be built by users for ~10-20x less than commercial solutions for training mice. The control code is also open source and was designed to be easily modified by users.
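To make the training logic concrete, here is a minimal sketch of the kind of fixed-ratio (FR1) schedule a device like FED3 runs: an active poke dispenses a pellet once the ratio is met, while pokes on the inactive side are logged without reward. The class and method names are hypothetical illustrations, not the real FED3 firmware API (which is Arduino C++).

```python
# Hypothetical sketch of a fixed-ratio (FR1) operant schedule of the
# kind FED3 runs in the home cage. Names are illustrative only.

class FR1Session:
    def __init__(self, ratio=1):
        self.ratio = ratio          # active pokes required per pellet
        self.left_pokes = 0
        self.pellets = 0

    def poke(self, side):
        """Register a nose-poke; 'left' is the active side here."""
        if side == "left":
            self.left_pokes += 1
            if self.left_pokes % self.ratio == 0:
                self.pellets += 1   # dispense pellet + cue stimuli
                return "pellet"
        return "no_reward"          # inactive poke: log only

session = FR1Session(ratio=1)
events = [session.poke(s) for s in ["left", "right", "left"]]
# → ["pellet", "no_reward", "pellet"]
```

Because the schedule is just a small state machine, users can swap in progressive-ratio or closed-economy variants by editing a few lines, which is the kind of modification the open control code is meant to invite.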
An open-source experimental framework for automation of cell biology experiments
Modern biological methods often require a large number of experiments to be conducted. For example, dissecting the molecular pathways involved in a variety of biological processes in neurons and non-excitable cells requires high-throughput compound-library or RNAi screens. Another example requiring large datasets is modern data-analysis methods such as deep learning, which have been successfully applied to a number of biological and medical questions. In this talk we will describe an open-source platform that allows such experiments to be automated. The platform consists of an XY stage, a perfusion system and an epifluorescence microscope with autofocusing. It is extremely easy to build and can be used for different experimental paradigms, ranging from immunolabeling and routine characterisation of large numbers of cell lines to high-throughput imaging of fluorescent reporters.
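The autofocusing step mentioned above typically works by sweeping the focus axis, scoring image sharpness at each position, and moving to the best one. The sketch below illustrates that loop with a synthetic "frame" and an intensity-variance sharpness score; real systems usually score camera frames with metrics such as variance-of-Laplacian. All function names and parameters here are hypothetical, not the platform's actual software.

```python
# Illustrative coarse autofocus sweep for an automated microscope:
# capture at each z position, score sharpness, pick the best plane.

def sharpness(image):
    """Intensity variance as a simple focus score."""
    n = len(image)
    mean = sum(image) / n
    return sum((p - mean) ** 2 for p in image) / n

def fake_capture(z, best_z=50):
    # Synthetic frame whose contrast peaks at the in-focus plane.
    contrast = 1.0 / (1.0 + abs(z - best_z))
    return [128 + contrast * v for v in (-60, -20, 20, 60)]

def autofocus(z_positions):
    scores = {z: sharpness(fake_capture(z)) for z in z_positions}
    return max(scores, key=scores.get)

best = autofocus(range(0, 101, 10))  # coarse sweep over the focus axis
# → 50, the simulated in-focus plane
```

In practice a coarse sweep like this is usually followed by a finer sweep around the winner, trading acquisition time against focus precision.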
A discussion on the necessity for Open Source Hardware in neuroscience research
Research tools are paramount for scientific development: they enable researchers to observe and manipulate natural phenomena, learn their principles, make predictions, develop new technologies and treatments, and improve living standards. Due to their cost and the geographical distribution of manufacturing companies, access to these tools is not widely available, hindering the pace of research and the ability of many communities to contribute to science and education and reap its benefits. One possible solution is to create research tools under the open source ethos, where all documentation about them (including their designs, building instructions, and operating instructions) is made freely available. Dubbed Open Science Hardware (OSH), this production method follows the established and successful principles of open source software and brings many advantages over traditional approaches, such as economic savings (see Pearce 2020 for the potential economic savings of developing open source research tools), distributed manufacturing, repairability, and higher customizability. This development method has been greatly facilitated by recent technological advances in fast prototyping tools, Internet infrastructure, documentation platforms, and lower costs of off-the-shelf electronic components. Taken together, these benefits have the potential to make research more inclusive, equitable, distributed and, most importantly, more reliable and reproducible, as: 1) researchers can know their tools' inner workings in minute detail; 2) they can calibrate their tools before every experiment and keep them running in optimal condition every time; 3) given the lower price point, a) students can be trained through hands-on classes and b) several copies of the same instrument can be built, parallelizing data collection and producing more robust datasets; and 4) labs across the world can share exactly the same instruments and create collaborative projects with standardized data collection and sharing.
Anatomical decision-making by cellular collectives: bioelectrical pattern memories, regeneration, and synthetic living organisms
A key question for basic biology and regenerative medicine concerns the way in which evolution exploits physics toward adaptive form and function. While genomes specify the molecular hardware of cells, what algorithms enable cellular collectives to reliably build specific, complex, target morphologies? Our lab studies the way in which all cells, not just neurons, communicate as electrical networks that enable scaling of single-cell properties into collective intelligences that solve problems in anatomical feature space. By learning to read, interpret, and write bioelectrical information in vivo, we have identified some novel controls of growth and form that enable incredible plasticity and robustness in anatomical homeostasis. In this talk, I will describe the fundamental knowledge gaps with respect to anatomical plasticity and pattern control beyond emergence, and discuss our efforts to understand large-scale morphological control circuits. I will show examples in embryogenesis, regeneration, cancer, and synthetic living machines. I will also discuss the implications of this work not only for regenerative medicine, but also for a fundamental understanding of the origin of body plans and the relationship between genomes and functional anatomy.
Synthesizing Machine Intelligence in Neuromorphic Computers with Differentiable Programming
The potential of machine learning and deep learning to advance artificial intelligence is driving a quest to build dedicated computers, such as neuromorphic hardware that emulates the biological processes of the brain. While the hardware technologies already exist, their application to real-world tasks is hindered by the lack of suitable programming methods. Advances at the interface of neural computation and machine learning have shown that key aspects of deep learning models and tools can be transferred to biologically plausible neural circuits. Building on these advances, I will show that differentiable programming can address many challenges of programming spiking neural networks for solving real-world tasks, and help devise novel continual and local learning algorithms. In turn, these new algorithms pave the way toward systematically synthesizing machine intelligence in neuromorphic hardware without detailed knowledge of the hardware circuits.
Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks
The emergence of brain-inspired neuromorphic computing as a paradigm for edge AI is motivating the search for high-performance and efficient spiking neural networks to run on this hardware. However, compared to classical neural networks in deep learning, current spiking neural networks lack competitive performance in compelling areas. Here, for sequential and streaming tasks, we demonstrate how spiking recurrent neural networks (SRNNs) using adaptive spiking neurons are able to achieve state-of-the-art performance compared to other spiking neural networks and almost reach, or exceed, the performance of classical recurrent neural networks (RNNs) while exhibiting sparse activity. From this, we calculate a 100x energy improvement for our SRNNs over classical RNNs on the harder tasks. We find in particular that adapting the timescales of spiking neurons is crucial for achieving such performance, and we demonstrate this performance for SRNNs with different spiking neuron models.
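To illustrate what an adaptive spiking neuron does, the toy model below adds an adaptation variable that jumps at each spike, raises the effective threshold, and decays with its own timescale. Under constant drive, the intervals between spikes stretch out, giving the sparse, history-dependent activity the abstract describes. Parameter names and values are illustrative assumptions, not the models from the talk.

```python
# Toy adaptive leaky integrate-and-fire neuron: each spike increments
# an adaptation variable that raises the threshold and decays with
# timescale rho, producing sparse, adapting firing.

def run_adaptive_lif(inputs, alpha=0.9, rho=0.95, base_thresh=1.0, beta=0.5):
    v, a = 0.0, 0.0               # membrane potential, adaptation variable
    spikes = []
    for x in inputs:
        v = alpha * v + x         # leaky integration of the input
        thresh = base_thresh + beta * a
        if v > thresh:
            spikes.append(1)
            v = 0.0               # reset after a spike
            a = rho * a + 1.0     # adaptation jumps on each spike
        else:
            spikes.append(0)
            a = rho * a           # adaptation decays between spikes
    return spikes

# Constant drive: adaptation progressively stretches inter-spike intervals.
spikes = run_adaptive_lif([0.6] * 20)
spike_times = [i for i, s in enumerate(spikes) if s]
# → [1, 4, 8, 13, 18]: gaps of 3, 4, 5, 5 steps
```

Making timescales like `rho` learnable per neuron is what lets such networks match the temporal structure of a task, which the abstract identifies as crucial for their performance.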
Experiences promoting research in cyberpsychology and neuroscience with open hardware: the case of Cybermind Lab in Peru
An overview of open technologies for science and education in Latin America
Open science hardware (OSH) as a concept usually refers to artifacts, but also to a practice, a discipline, and a collective of people pushing for open access to the design of scientific tools. Since 2016, the Global Open Science Hardware (GOSH) movement has gathered actors from academia, education, the private sector and civic organisations to advocate for OSH to be ubiquitous by 2025. In Latin America, GOSH advocates have fundraised and gathered around the development of annual "residencies" for building hardware for science and education. The community is currently defining its regional strategy and identifying other regional actors working on the democratization of science and technology. In this presentation I will give an overview of the open hardware movement for science, with a focus on the activities and strategy of the Latin American chapter and concrete ways to engage.
Computational models of neural development
Unlike even the most sophisticated current forms of artificial intelligence, developing biological organisms must build their neural hardware from scratch. Furthermore, they must start to evade predators and find food before this construction process is complete. I will discuss an interdisciplinary program of mathematical and experimental work which addresses some of the computational principles underlying neural development. This includes (i) how growing axons navigate to their targets by detecting and responding to molecular cues in their environment, (ii) the formation of maps in the visual cortex and how these are influenced by visual experience, and (iii) how patterns of neural activity in the zebrafish brain develop to facilitate precisely targeted hunting behaviour. Together this work contributes to our understanding of both normal neural development and the etiology of neurodevelopmental disorders.
Open Neuroscience: Challenging scientific barriers with Open Source & Open Science tools
The Open Science movement advocates for more transparent, equitable and reliable science. It focuses on improving existing infrastructures and spans all aspects of the scientific process, from implementing systems that reward pre-registering studies and guarantee their publication, all the way to making research data citable and freely available. In this context, open source tools (and the development ethos supporting them) are becoming increasingly common in academic labs, as researchers realize that these tools can improve the quality of their work while cutting costs. In this talk, an overview of open source tools for neuroscience will be given, with a focus on software and hardware, and on how their use can bring scientific independence and make research evolve faster.
Biologically Realistic Computational Primitives of Neocortex Implemented on Neuromorphic Hardware Improve Vision Transformer Performance
COSYNE 2025