Data Analysis
Dr. Demian Battaglia/Dr. Romain Goutagny
The postdoc position is under the joint mentorship of Dr. Demian Battaglia and Dr. Romain Goutagny at the University of Strasbourg, France, in the Functional Systems Dynamics team (FunSy). The position starts as soon as possible and can last up to two years. The job offer is funded by the French ANR 'HippoComp' project, which focuses on the complexity of hippocampal oscillations and the hypothesis that such complexity can serve as a computational resource. The team performs electrophysiological recordings in the hippocampus and cortex during spatial navigation and memory tasks in mice (wild-type and mutants developing various neuropathologies) and has access to vast datasets through local and international collaborations. They use a broad spectrum of computational tools, ranging from time-series and network analyses, information theory, and machine learning to multi-scale computational modeling.
Prof. Maxime Baud/Dr. Timothée Proix
A postdoc position is available under the shared supervision of Prof. Maxime Baud and Dr. Timothée Proix, who both specialize in quantitative neuroscience research. Together, they are running a three-year clinical trial involving patients with epilepsy who received a minimally invasive EEG device beneath the scalp for the chronic recording (months) of brain signals during wake and sleep. The postdoc will help analyze massive amounts of EEG data and develop machine learning algorithms that forecast seizures, aiming to estimate seizure risk 24 hours in advance. The project lies at the interface between machine learning and EEG data analysis.
Rune W. Berg
The lab of Rune W. Berg is looking for a highly motivated and dynamic researcher for a 3-year position to start January 1st, 2024. The topic is the neuroscience of motor control with a focus on locomotion, spinal circuitry, and connections with the brain. The successful candidate will: 1) perform experimental recordings of neurons in the brain and spinal cord of awake behaving rats using Neuropixels and NeuroNexus electrodes combined with optogenetics; 2) analyze the large amount of data generated from these experiments, including tissue processing; 3) participate in the development of a new theory of motor control.
Maximilian Riesenhuber, PhD
We have an opening for a postdoc position investigating the neural bases of deep multimodal learning in the brain. The project involves EEG and laminar 7T imaging (in collaboration with Dr. Peter Bandettini’s lab at NIMH) to test computational hypotheses for how the brain learns multimodal concept representations. Responsibilities of the postdoc include running EEG and fMRI experiments, data analysis and manuscript preparation. Georgetown University has a vibrant neuroscience community with over fifty labs participating in the Interdisciplinary Program in Neuroscience and a number of relevant research centers, including the new Center for Neuroengineering (cne.georgetown.edu). Interested candidates should submit a CV, a brief (1 page) statement of research interests, representative reprints, and the names and contact information of three references to Interfolio via https://apply.interfolio.com/148520. Faxed, emailed, or mailed applications will not be accepted. Questions about the position can be directed to Maximilian Riesenhuber (mr287@georgetown.edu).
N/A
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. We especially welcome applicants who develop mathematical approaches, computational models, and machine learning methods to study the brain at the circuit, systems, or cognitive level. The current Grossman Center faculty with whom appointees can work are as follows. Brent Doiron’s lab investigates how the cellular and synaptic circuitry of neuronal circuits supports the complex dynamics and computations that are routinely observed in the brain. Jorge Jaramillo’s lab investigates how subcortical structures interact with cortical circuits to subserve cognitive processes such as memory, attention, and decision making. Ramon Nogueira’s lab investigates the geometry of representations as the computational support of cognitive processes like abstraction in noisy artificial and biological neural networks. Marcella Noorman’s lab investigates how properties of synapses, neurons, and circuits shape the neural dynamics that enable flexible and efficient computation. Samuel Muscinelli’s lab studies how the anatomy of brain circuits both governs learning and adapts to it, combining analytical theory, machine learning, and data analysis in close collaboration with experimentalists. Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Neuroscience Institute, as well as other labs from the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and UChicago’s Data Science Institute.
The Neuroscience Institute is currently engaged in a major expansion that includes the incorporation of several new faculty members in the next few years.
Lorenzo Fontolan
We are pleased to announce the opening of a PhD position at INMED (Aix-Marseille University) through the SCHADOC program, focused on the neural coding of social interactions and memory in the cortex of behaving mice. The project will investigate how social behaviors essential for cooperation, mating, and group dynamics are encoded in the brain, and how these processes are disrupted in neurodevelopmental disorders such as autism. This project uses longitudinal calcium imaging and population-level data analysis to study how cortical circuits encode social interactions in mice. Recordings from mPFC and S1 in wild-type and Neurod2 KO mice will be used to extract neural representations of social memory. The candidate will develop and apply computational models of neural dynamics and representational geometry to uncover how these codes evolve over time and are disrupted in social amnesia.
Neurobiological constraints on learning: bug or feature?
Understanding how brains learn requires bridging evidence across scales, from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing whether wiring motifs from fly brain connectomes can improve the performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
State-of-the-Art Spike Sorting with SpikeInterface
This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
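The earliest processing step in such a pipeline, detecting putative spikes as threshold crossings on the filtrated traces, can be sketched in plain Python. This is a deliberately minimal toy, not the SpikeInterface API; real pipelines add filtering, whitening, drift correction, sorting, and quality metrics, and the function name here is illustrative:

```python
# Toy first step of a spike-sorting pipeline: detect threshold
# crossings on a band-passed extracellular trace. Extracellular
# spikes appear as sharp negative deflections, so we look for
# samples below -threshold, with a simple refractory period.

def detect_peaks(trace, threshold, refractory=3):
    """Return sample indices where the trace drops below -threshold,
    keeping at least `refractory` samples between detections."""
    peaks, last = [], -refractory
    for i, v in enumerate(trace):
        if v < -threshold and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

# Synthetic trace: baseline noise with two negative spike deflections.
trace = [0.1, -0.2, 0.0, -5.0, -1.0, 0.2, 0.1, -4.5, -0.5, 0.0]
print(detect_peaks(trace, threshold=3.0))  # [3, 7]
```

In practice this naive detector is only a starting point; the sorters SpikeInterface wraps cluster the detected waveforms into putative single units, which is the hard part the webinar's hands-on tutorial covers.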
1.8 billion regressions to predict fMRI (journal club)
Public journal club. This week, Mihir will present the “1.8 billion regressions” paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), in which the authors use hundreds of pretrained model embeddings to find the best predictors of fMRI activity.
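The workhorse of such encoding-model studies is regularized (ridge) regression from embedding features to voxel responses. A one-feature closed-form sketch of the idea, purely illustrative (the paper fits thousands of features per voxel with cross-validated regularization):

```python
# Minimal ridge regression with a single feature (closed form):
# w = (x . y) / (x . x + lam). Encoding models fit this per voxel
# over many embedding dimensions; one feature suffices to show
# the role of the regularizer lam.

def ridge_1d(x, y, lam):
    num = sum(xi * yi for xi, yi in zip(x, y))
    den = sum(xi * xi for xi in x) + lam
    return num / den

x = [1.0, 2.0, 3.0, 4.0]   # feature values across stimuli
y = [2.0, 4.0, 6.0, 8.0]   # voxel responses, exactly y = 2x
print(ridge_1d(x, y, lam=0.0))   # 2.0: unregularized fit recovers the slope
print(ridge_1d(x, y, lam=30.0))  # shrunk toward zero by regularization
```

With many correlated embedding features, this shrinkage is what keeps per-voxel fits stable, which is why ridge (rather than ordinary least squares) dominates in fMRI encoding work.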
Analyzing artificial neural networks to understand the brain
In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity beyond using them merely as models of biological processing. In the second part of this talk, I will discuss, and solicit feedback on, a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach, the form the results could take, and how this would benefit neuroscience.
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Toward an open science ecosystem for neuroimaging
It is now widely accepted that openness and transparency are keys to improving the reproducibility of scientific research, but many challenges remain to adoption of these practices. I will discuss the growth of an ecosystem for open science within the field of neuroimaging, focusing on platforms for open data sharing and open source tools for reproducible data analysis. I will also discuss the role of the Brain Imaging Data Structure (BIDS), a community standard for data organization, in enabling this open science ecosystem, and will outline the scientific impacts of these resources.
Experimental Neuroscience Bootcamp
This course provides a fundamental foundation in the modern techniques of experimental neuroscience. It introduces the essentials of sensors, motor control, microcontrollers, programming, data analysis, and machine learning by guiding students through the “hands on” construction of an increasingly capable robot. In parallel, related concepts in neuroscience are introduced as nature’s solution to the challenges students encounter while designing and building their own intelligent system.
Modern Approaches to Behavioural Analysis
The goal of neuroscience is to understand how the nervous system controls behaviour, not only in the simplified environments of the lab, but also in the natural environments for which nervous systems evolved. In pursuing this goal, neuroscience research is supported by an ever-larger toolbox, ranging from optogenetics to connectomics. However, these tools are often coupled with reductionist approaches to linking nervous systems and behaviour. This course will introduce advanced techniques for measuring and analysing behaviour, as well as three fundamental principles necessary for understanding biological behaviour: (1) morphology and environment; (2) action-perception closed loops and purpose; and (3) individuality and historical contingencies [1]. [1] Gomez-Marin, A., & Ghazanfar, A. A. (2019). The life of behavior. Neuron, 104(1), 25-36.
Pynapple: a light-weight python package for neural data analysis - webinar + tutorial
In systems neuroscience, datasets are multimodal and include data streams of various origins: multichannel electrophysiology, one- or two-photon calcium imaging, behavior, etc. Often, the exact nature of these data streams is unique to each lab, if not each project. Analyzing these datasets in an efficient and open way is crucial for collaboration and reproducibility. In this combined webinar and tutorial, Adrien Peyrache and Guillaume Viejo will present Pynapple, a Python-based data analysis pipeline for systems neuroscience. Designed for flexibility and versatility, Pynapple allows users to perform cross-modal neural data analysis via a common programming approach, which facilitates easy sharing of both analysis code and data.
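A core operation in this style of cross-modal analysis is restricting time-stamped data (spikes, tracked behavior) to epochs of interest; Pynapple exposes this through interval-set objects and a restrict method on its time-series containers. A stdlib-only sketch of the underlying idea, not Pynapple's actual API:

```python
# Illustrative core of epoch-based analysis: keep only events that
# fall inside a set of (start, end) intervals. Pynapple's Ts/Tsd and
# IntervalSet objects implement this (and much more) efficiently.

def restrict(timestamps, epochs):
    """Keep timestamps falling inside any (start, end) epoch."""
    return [t for t in timestamps
            if any(start <= t <= end for start, end in epochs)]

spikes = [0.5, 1.2, 2.7, 3.1, 4.8, 6.0]   # spike times in seconds
run_epochs = [(1.0, 3.0), (4.5, 5.5)]      # e.g. periods of locomotion
print(restrict(spikes, run_epochs))        # [1.2, 2.7, 4.8]
```

Because every data stream (spikes, position, LFP) supports the same restriction operation, analyses like "firing rate while running" become one-liners regardless of which lab produced the data.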
How neural circuits organize and learn during development
Generating brain circuits that are both flexible and stable requires the coordination of powerful developmental mechanisms acting at different scales, including activity-dependent synaptic plasticity and changes in single-neuron properties. During early development, before any sensory experience, the brain prepares to efficiently compute information and reliably generate behavior through patterned spontaneous activity. After the onset of sensory experience, ongoing activity continues to modify sensory circuits and plays an important functional role in the mature brain. Using quantitative data analysis, experiment-driven theory, and computational modeling, I will present a framework for how neural circuits are built and organized during early postnatal development into functional units, and how they are modified by intact and perturbed sensory-evoked activity. Inspired by experimental data from sensory cortex, I will then show how neural circuits use the resulting non-random connectivity to flexibly gate a network’s response, providing a mechanism for routing information.
Parametric control of flexible timing through low-dimensional neural manifolds
Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory, and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly enhance the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics through a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism.
Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
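Why low-rank connectivity yields low-dimensional dynamics can be illustrated in a few lines: with rank-1 connectivity W = outer(m, n), a linear network's N-dimensional state evolves, in the direction that matters, as a single latent variable kappa = n . r. This toy linear sketch is only an illustration of the dimensionality-reduction principle; the networks in the study are trained and nonlinear:

```python
# Rank-1 linear network: dr/dt = -r + W r with W = m n^T.
# Projecting onto n gives a closed 1-D equation for kappa = n . r:
# dkappa/dt = -kappa + (n . m) kappa.

def step_full(r, m, n, dt=0.1):
    """One Euler step of the full N-dimensional dynamics."""
    kappa = sum(ni * ri for ni, ri in zip(n, r))
    return [ri + dt * (-ri + mi * kappa) for ri, mi in zip(r, m)]

def step_latent(kappa, m, n, dt=0.1):
    """The equivalent one-dimensional update of kappa = n . r."""
    overlap = sum(ni * mi for ni, mi in zip(n, m))
    return kappa + dt * (-kappa + overlap * kappa)

m, n = [1.0, 0.5, -0.5], [0.5, 1.0, 0.2]   # arbitrary rank-1 vectors
r0 = [0.2, 0.1, 0.3]
r1 = step_full(r0, m, n)
k1 = step_latent(sum(ni * ri for ni, ri in zip(n, r0)), m, n)
# The projection of the full state matches the 1-D latent exactly:
print(abs(sum(ni * ri for ni, ri in zip(n, r1)) - k1) < 1e-9)  # True
```

Imposing rank-R connectivity during training generalizes this: activity is confined to (at most) R latent directions, which is the low-dimensional organization the contextual input then controls.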
Maths, AI and Neuroscience meeting
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting, we bring together experts from mathematics, artificial intelligence, and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data; on explainable AI; on how AI can help neuroscience; and on the extent to which the brain may be using algorithms similar to those used in modern machine learning. Finally, we will wrap up with a discussion of some aspects of neural hardware that may not have been considered in machine learning.
Neural Population Dynamics for Skilled Motor Control
The ability to reach, grasp, and manipulate objects is a remarkable expression of motor skill, and the loss of this ability in injury, stroke, or disease can be devastating. These behaviors are controlled by the coordinated activity of tens of millions of neurons distributed across many CNS regions, including the primary motor cortex. While many studies have characterized the activity of single cortical neurons during reaching, the principles governing the dynamics of large, distributed neural populations remain largely unknown. Recent work in primates has suggested that during the execution of reaching, motor cortex may autonomously generate the neural pattern controlling the movement, much like the spinal central pattern generator for locomotion. In this seminar, I will describe recent work that tests this hypothesis using large-scale neural recording, high-resolution behavioral measurements, dynamical systems approaches to data analysis, and optogenetic perturbations in mice. We find, by contrast, that motor cortex requires strong, continuous, and time-varying thalamic input to generate the neural pattern driving reaching. In a second line of work, we demonstrate that the cortico-cerebellar loop is not critical for driving the arm towards the target, but instead fine-tunes movement parameters to enable precise and accurate behavior. Finally, I will describe my future plans to apply these experimental and analytical approaches to the adaptive control of locomotion in complex environments.
Space wrapped onto a grid cell torus
Entorhinal grid cells, so-called because of their hexagonally tiled spatial receptive fields, are organized in modules which, collectively, are believed to form a population code for the animal’s position. Here, we apply topological data analysis to simultaneous recordings of hundreds of grid cells and show that joint activity of grid cells within a module lies on a toroidal manifold. Each position of the animal in its physical environment corresponds to a single location on the torus, and each grid cell is preferentially active within a single “field” on the torus. Toroidal firing positions persist between environments, and between wakefulness and sleep, in agreement with continuous attractor models of grid cells.
Learning the structure and investigating the geometry of complex networks
Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality.

Machine learning and networks. Decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical, and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph datasets that computes several thousand graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for the automated identification and selection of important and interpretable features underpinning the characterisation of graph datasets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark datasets whilst retaining the interpretability of network features, which we exemplify on a dataset of neuronal morphology images.

Graph dimensionality. Dimension is a fundamental property of objects and of the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from approaches based on fractals, we present a new framework to define intrinsic notions of dimension on networks: the relative, local, and global dimension. We showcase our method on various physical systems.
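The idea behind such highly comparative analysis is to map each graph to a fixed-length feature vector that standard classifiers can consume. A toy sketch with three hand-picked features (hcga itself computes thousands; the features chosen here are arbitrary illustrations, not hcga's feature set):

```python
# Map a graph (edge list + node count) to a small feature vector:
# [number of edges, maximum degree, edge density]. Classifiers then
# operate on these vectors instead of raw graph structure.

def graph_features(edges, n_nodes):
    deg = [0] * n_nodes
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n_edges = len(edges)
    density = 2 * n_edges / (n_nodes * (n_nodes - 1))
    return [n_edges, max(deg), density]

triangle = [(0, 1), (1, 2), (0, 2)]
path = [(0, 1), (1, 2)]
print(graph_features(triangle, 3))  # [3, 2, 1.0]
print(graph_features(path, 3))      # [2, 2, ~0.67]
```

Because every feature is a named, interpretable quantity, feature selection on such vectors tells you which structural properties drive a classification, which is the interpretability advantage over end-to-end graph neural networks highlighted in the talk.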
Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment-by-moment collective dynamics of the brain instantiate learning and cognition. However, efficiently extracting such a conceptual understanding from large, high-dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high-dimensional statistics and deep learning can aid us in this process. In particular, we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning; (2) how to trade off very different experimental resources, like numbers of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
From genetics to neurobiology through transcriptomic data analysis
Over the past years, genetic studies have uncovered hundreds of genetic variants associated with complex brain disorders. While this represents a major step forward in understanding the genetic etiology of brain disorders, the functional interpretation of these variants remains challenging. We aim to help with the functional characterization of variants through transcriptomic data analysis. For instance, we rely on brain transcriptome atlases, such as the Allen Brain Atlases, to infer functional relations between genes; one example is the identification of signaling mechanisms of steroid receptors. Further, by integrating brain transcriptome atlases with neuropathology and neuroimaging data, we identify key genes and pathways associated with brain disorders (e.g. Parkinson's disease). With technological advances, we can now profile gene expression in single cells at large scale. These developments have presented significant computational challenges. Our lab focuses on developing scalable methods to identify cells in single-cell data through interactive visualization, scalable clustering, classification, and interpretable trajectory modelling. We also work on methods to integrate single-cell data across studies and technologies.
Mice alternate between discrete strategies during perceptual decision-making
Classical models of perceptual decision-making assume that animals use a single, consistent strategy to integrate sensory evidence and form decisions during an experiment. In this talk, I aim to convince you that this common view is incorrect. I will show results from applying a latent variable framework, the “GLM-HMM”, to hundreds of thousands of trials of mouse choice data. Our analysis reveals that mice don’t lapse. Instead, mice switch back and forth between engaged and disengaged behavior within a single session, and each mode of behavior lasts tens to hundreds of trials.
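The state-inference step behind such a model can be sketched with a plain two-state hidden Markov model and a forward filter. In the actual GLM-HMM, the fixed per-state accuracies below are replaced by per-state logistic regressions on the stimulus; all numbers here are illustrative:

```python
# Two-state HMM ("engaged" = state 0, "disengaged" = state 1) with
# sticky transitions, filtered online with the forward algorithm.
# Observations: 0 = correct choice, 1 = error.

def forward_filter(obs, trans, emit, prior):
    """Return P(state | observations so far) after each trial."""
    posteriors, belief = [], prior[:]
    for o in obs:
        # predict: propagate belief through the transition matrix
        pred = [sum(belief[i] * trans[i][j] for i in range(2)) for j in range(2)]
        # update: weight by each state's likelihood of the observation
        upd = [pred[j] * emit[j][o] for j in range(2)]
        z = sum(upd)
        belief = [u / z for u in upd]
        posteriors.append(belief[:])
    return posteriors

trans = [[0.95, 0.05], [0.05, 0.95]]   # sticky: states persist ~tens of trials
emit = [[0.9, 0.1], [0.5, 0.5]]        # engaged: 90% correct; disengaged: chance
obs = [0, 0, 0, 1, 1, 1]               # a run of correct trials, then errors
post = forward_filter(obs, trans, emit, [0.5, 0.5])
print(post[2][0] > 0.8)   # True: correct runs push belief toward "engaged"
print(post[-1][0] < 0.5)  # True: errors shift belief toward "disengaged"
```

The sticky diagonal of the transition matrix is what makes inferred states last tens to hundreds of trials, matching the within-session switching described in the talk.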
Reproducible EEG from raw data to publication figures
In this talk I will present recent developments in data sharing, organization, and analysis that make it possible to build fully reproducible workflows. First, I will present the Brain Imaging Data Structure (BIDS) and discuss how it supports building such workflows, showing some new tools to read, import, and create studies from EEG data structured that way. Second, I will present several newly developed tools for reproducible pre-processing and statistical analyses. Although it does take some extra effort, I will argue that it is largely feasible to make most EEG data analysis fully reproducible.
Panel discussion: Practical advice for reproducibility in neuroscience
This virtual, interactive panel on reproducibility in neuroscience will focus on practical advice that researchers at all career stages could implement to improve the reproducibility of their work, from power analyses and pre-registering reports to selecting statistical tests and data sharing. The event will comprise introductions of our speakers and how they came to be advocates for reproducibility in science, followed by a 25-minute discussion on reproducibility, including practical advice for researchers on how to improve their data collection, analysis, and reporting, and then 25 minutes of audience Q&A. In total, the event will last one hour and 15 minutes. Afterwards, some of the speakers will join us for an informal chat and Q&A reserved only for students/postdocs.
Biomedical Image and Genetic Data Analysis with machine learning; applications in neurology and oncology
In this presentation I will show the opportunities and challenges of big data analytics with AI techniques in medical imaging, also in combination with genetic and clinical data. Both conventional machine learning techniques, such as radiomics for tumor characterization, and deep learning techniques for studying brain ageing and prognosis in dementia will be addressed. The concept of deep imaging, a full integration of medical imaging and machine learning, will also be discussed. Finally, I will address the challenge of successfully integrating these technologies into the daily clinical workflow.
Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment-by-moment collective dynamics of the brain instantiate learning and cognition. However, efficiently extracting such a conceptual understanding from large, high-dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high-dimensional statistics and deep learning can aid us in this process. In particular, we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning; how to trade off very different experimental resources, like numbers of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; and optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
Machine learning methods applied to dMRI tractography for the study of brain connectivity
Tractography datasets, calculated from diffusion MRI (dMRI), represent the main white-matter structural connections in the brain. Thanks to advances in image acquisition and processing, the complexity and size of these datasets have constantly increased, and they also contain a large number of artifacts. We present some examples of algorithms, most of them based on classical machine learning approaches, to analyze these data and identify common connectivity patterns among subjects.
African Neuroscience: Current Status and Prospects
Understanding the function and dysfunction of the brain remains one of the key challenges of our time. However, an overwhelming majority of brain research is carried out in the Global North, by a minority of well-funded and intimately interconnected labs. In contrast, with an estimated one neuroscientist per million people in Africa, news about neuroscience research from the Global South remains sparse. Clearly, devising new policies to boost Africa’s neuroscience landscape is imperative. However, the policy must be based on accurate data, which is largely lacking. Such data must reflect the extreme heterogeneity of research outputs across the continent’s 54 countries. We have analysed all of Africa’s Neuroscience output over the past 21 years and uniquely verified the work performed in African laboratories. Our unique dataset allows us to gain accurate and in-depth information on the current state of African Neuroscience research, and to put it into a global context. The key findings from this work and recommendations on how African research might best be supported in the future will be discussed.