Tests
Scaling Up Bioimaging with Microfluidic Chips
Explore how microfluidic chips can enhance your imaging experiments by increasing control, throughput, or flexibility. In this remote, personalized workshop, participants will receive expert guidance, support and chips to run tests on their own microscopes.
Influence of the context of administration in the antidepressant-like effects of the psychedelic 5-MeO-DMT
Psychedelics like psilocybin have shown rapid and long-lasting efficacy against depressive and anxiety symptoms. Other psychedelics with shorter half-lives, such as DMT and 5-MeO-DMT, have also shown promising preliminary outcomes in major depression, making them interesting candidates for clinical practice. Despite several promising clinical studies, the influence of the context of administration on therapeutic responses and adverse effects remains poorly documented. To address this, we conducted preclinical studies evaluating the psychopharmacological profile of 5-MeO-DMT in contexts previously validated in mice as either pleasant (positive setting) or aversive (negative setting). Healthy C57BL/6J male mice received a single intraperitoneal (i.p.) injection of 5-MeO-DMT at doses of 0.5, 5, or 10 mg/kg, with assessments at 2 hours, 24 hours, and one week post-administration. In a corticosterone (CORT) mouse model of depression, 5-MeO-DMT was administered in different settings, and behavioral tests mimicking core symptoms of depression and anxiety were conducted. In CORT-exposed mice, an acute dose of 0.5 mg/kg administered in a neutral setting produced antidepressant-like effects at 24 hours, as shown by reduced immobility time in the Tail Suspension Test (TST). In a positive setting, the drug also reduced the latency to first immobility and the total immobility time in the TST. However, these beneficial effects were negated in a negative setting, where 5-MeO-DMT failed to produce antidepressant-like effects and instead elicited an anxiogenic response in the Elevated Plus Maze (EPM). Our results indicate a strong influence of setting on the psychopharmacological profile of 5-MeO-DMT. Future experiments will examine cortical markers of pre- and post-synaptic density to correlate neuroplasticity changes with the behavioral effects of 5-MeO-DMT in different settings.
Exploring Lifespan Memory Development and Intervention Strategies for Memory Decline through a Unified Model-Based Assessment
Understanding and potentially reversing memory decline necessitates a comprehensive examination of memory's evolution throughout life. Traditional memory assessments, however, suffer from a lack of comparability across age groups due to the diverse nature of the tests employed. Addressing this gap, our study introduces a novel, ACT-R model-based memory assessment designed to provide a consistent metric for evaluating memory function across the lifespan, from 5- to 85-year-olds. This approach allows direct comparison across the various tasks and materials tailored to specific age groups. Our findings reveal a pronounced U-shaped trajectory of long-term memory function, with performance at age 5 mirroring that observed in elderly individuals with impairments, highlighting critical periods of memory development and decline. Leveraging this unified assessment method, we further investigate the therapeutic potential of rs-fMRI-guided TBS targeting area 8AV in individuals with early-onset Alzheimer’s Disease—a region implicated in memory deterioration and mood disturbances in this population. This research not only advances our understanding of memory's lifespan dynamics but also opens new avenues for targeted interventions in Alzheimer’s Disease, marking a significant step forward in the quest to mitigate memory decay.
How AI is advancing Clinical Neuropsychology and Cognitive Neuroscience
This talk aims to highlight the immense potential of Artificial Intelligence (AI) in advancing psychology and cognitive neuroscience. Through the integration of machine learning algorithms, big data analytics, and neuroimaging techniques, AI has the potential to revolutionize the way we study human cognition and brain characteristics. I will highlight our latest scientific advances in using AI to gain deeper insights into variations in cognitive performance across the lifespan and along the continuum from healthy to pathological functioning. The presentation will showcase cutting-edge examples of AI-driven applications, such as deep learning for automated scoring of neuropsychological tests, natural language processing to characterize the semantic coherence of patients with psychosis, and other applications for diagnosing and treating psychiatric and neurological disorders. Furthermore, the talk will address the challenges and ethical considerations associated with using AI in psychological research, such as data privacy, bias, and interpretability. Finally, it will discuss future directions and opportunities for further advances in this dynamic field.
Computational models and experimental methods for the human cornea
The eye is a multi-component biological system in which mechanics, optics, transport phenomena, and chemical reactions are tightly interlaced, characterized by the typical biological variability in sizes and material properties. The eye’s response to external actions is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics involved and for the intrinsic microstructure of the tissues, developed with the aid of state-of-the-art computational biomechanics. In recent years, our activity has been devoted to developing a comprehensive model of the cornea that aims to be entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties relate to the characterization of the tissues, which requires in-vivo tests to complement the well-documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of a single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents an important support for the characterization of corneal tissues, especially in pathological conditions. In this presentation I will provide an overview of the computational models and experimental approaches developed in our group for the human cornea.
Exploring the Potential of High-Density Data for Neuropsychological Testing with Coregraph
Coregraph is a tool under development that allows us to collect high-density data patterns during the administration of classic neuropsychological tests such as the Trail Making Test and Clock Drawing Test. These tests are widely used to evaluate cognitive function and screen for neurodegenerative disorders, but traditional methods of data collection only yield sparse information, such as test completion time or error types. By contrast, the high-density data collected with Coregraph may contribute to a better understanding of the cognitive processes involved in executing these tests. In addition, Coregraph may potentially revolutionize the field of cognitive evaluation by aiding in the prediction of cognitive deficits and in the identification of early signs of neurodegenerative disorders such as Alzheimer's dementia. By analyzing high-density graphomotor data through techniques like manual feature engineering and machine learning, we can uncover patterns and relationships that would be otherwise hidden with traditional methods of data analysis. We are currently in the process of determining the most effective methods of feature extraction and feature analysis to develop Coregraph to its full potential.
What's wrong with the prosopagnosia literature? A new approach to diagnosing and researching the condition
Developmental prosopagnosia is characterised by severe, lifelong difficulties in recognising facial identity. Most researchers require that prosopagnosia cases exhibit ultra-conservative levels of impairment on the Cambridge Face Memory Test before including them in their experiments. As a result, the majority of people who believe they have the condition are excluded from the scientific literature. In this talk I outline the many issues that will afflict prosopagnosia research if this continues, and show that these excluded cases do exhibit impairments on all commonly used diagnostic tests when a group-based method of assessment is used. I propose a paradigm shift away from cognitive task-based approaches to diagnosing prosopagnosia, and outline a new way for researchers to investigate this condition.
Protocols for the social transfer of pain and analgesia in mice
We provide protocols for the social transfer of pain and analgesia in mice. We describe the steps to induce pain or analgesia (pain relief) in bystander mice via a 1-h social interaction with a partner injected with CFA (complete Freund’s adjuvant) or with CFA and morphine, respectively. We detail behavioral tests to assess pain or analgesia in the untreated bystander mice. This protocol has been validated in mice and rats and can be used for investigating mechanisms of empathy.
Highlights
• A protocol for the rapid social transfer of pain in rodents
• Detailed requirements for handling and housing conditions
• Procedures for habituation, social interaction, and pain induction and assessment
• Adaptable for the social transfer of analgesia; may be used to study empathy in rodents
https://doi.org/10.1016/j.xpro.2022.101756
How do visual abilities relate to each other?
In vision, there is, surprisingly, very little evidence for common factors. Most studies have found only weak correlations between performance in different visual tests, meaning that a participant who performs better in one test is not more likely to perform better in another. Similarly, in ageing, cross-sectional studies have repeatedly shown that older adults perform worse than young adults in most visual tests. However, within the older population, there is no evidence for a common factor underlying visual abilities. To investigate the decline of visual abilities further, we conducted a longitudinal study, administering a battery of nine visual tasks three times, with re-tests after about 4 and 7 years. Most visual abilities were rather stable across the 7 years, with the notable exception of visual acuity. I will discuss possible causes of these paradoxical outcomes.
Do we measure what we think we are measuring?
Tests used in the empirical sciences are often (implicitly) assumed to be representative of a target mechanism, in the sense that similar tests should lead to similar results. In this talk, using resting-state electroencephalography (EEG) as an example, I will argue that this assumption does not necessarily hold. Typically, EEG studies select a single analysis method thought to be representative of the research question asked. Using multiple methods, we extracted a variety of features from a single resting-state EEG dataset and conducted correlational and case-control analyses. Many EEG features revealed a significant effect in the case-control analyses, and EEG features likewise correlated significantly with cognitive tasks. However, when we compared these features pairwise, we did not find strong correlations. A number of explanations for these results will be discussed.
Assessing the potential for learning analogy problem-solving: does EF play a role?
Analogical reasoning is related to everyday and scholastic learning and is a robust predictor of g. Therefore, children's ability to reason by analogy is often measured in a school context to gain insight into their cognitive and intellectual functioning. Typically, this ability is measured by means of conventional, static instruments. Researchers and practitioners have criticised static tests for providing only an overview of what individuals have learned in the past; for this reason, such tests are assumed not to tap into the potential for learning, in the sense of Vygotsky's zone of proximal development. This seminar will focus on children's potential for reasoning by analogy as measured by a dynamic test, which has a test-training-test design. In so doing, the potential relationship between dynamic test outcomes and executive functioning (EF) will be explored.
Multi-modal biomarkers improve prediction of memory function in cognitively unimpaired older adults
Identifying biomarkers that predict current and future cognition may improve estimates of Alzheimer’s disease risk among cognitively unimpaired older adults (CU). In vivo measures of amyloid and tau protein burden and task-based functional MRI measures of core memory mechanisms, such as the strength of cortical reinstatement during remembering, have each been linked to individual differences in memory in CU. This study assesses whether combining CSF biomarkers with fMRI indices of cortical reinstatement improves estimation of memory function in CU, assayed using three unique tests of hippocampal-dependent memory. Participants were 158 CU (90F, aged 60-88 years, CDR=0) enrolled in the Stanford Aging and Memory Study (SAMS). Cortical reinstatement was quantified using multivoxel pattern analysis of fMRI data collected during completion of a paired associate cued recall task. Memory was assayed by associative cued recall, a delayed recall composite, and a mnemonic discrimination task that involved discrimination between studied ‘target’ objects, novel ‘foil’ objects, and perceptually similar ‘lure’ objects. CSF Aβ42, Aβ40, and p-tau181 were measured with the automated Lumipulse G system (N=115). Regression analyses examined cross-sectional relationships between memory performance in each task and a) the strength of cortical reinstatement in the Default Network (comprised of posterior medial, medial frontal, and lateral parietal regions) during associative cued recall and b) CSF Aβ42/Aβ40 and p-tau181, controlling for age, sex, and education. For mnemonic discrimination, linear mixed effects models were used to examine the relationship between discrimination (d’) and each predictor as a function of target-lure similarity. Stronger cortical reinstatement was associated with better performance across all three memory assays. 
Age and higher CSF p-tau181 were each associated with poorer associative memory and a diminished improvement in mnemonic discrimination as target-lure similarity decreased. When combined in a single model, CSF p-tau181 and Default Network reinstatement strength, but not age, explained unique variance in associative memory and mnemonic discrimination performance, outperforming the single-modality models. Combining fMRI measures of core memory functions with protein biomarkers of Alzheimer’s disease significantly improved prediction of individual differences in memory performance in CU. Leveraging multimodal biomarkers may enhance future prediction of risk for cognitive decline.
Effects of pathological Tau on hippocampal neuronal activity and spatial memory in ageing mice
The gradual accumulation of hyperphosphorylated forms of the Tau protein (pTau) in the human brain correlates with cognitive dysfunction and neurodegeneration. I will present our recent findings on the consequences of human pTau aggregation in the hippocampal formation of a mouse tauopathy model. We show that pTau preferentially accumulates in deep-layer pyramidal neurons, leading to their neurodegeneration. In aged but not younger mice, pTau spreads to oligodendrocytes. During ‘goal-directed’ navigation, we detect fewer high-firing pyramidal cells, but coupling to network oscillations is maintained in the remaining cells. The firing patterns of individually recorded and labelled pyramidal and GABAergic neurons are similar in transgenic and non-transgenic mice, as are network oscillations, suggesting intact neuronal coordination. This is consistent with a lack of pTau in subcortical brain areas that provide rhythmic input to the cortex. Spatial memory tests reveal a reduction in short-term familiarity with spatial cues but unimpaired spatial working and reference memory. These results suggest that preserved subcortical network mechanisms compensate for the widespread pTau aggregation in the hippocampal formation. I will also briefly discuss ideas on the subcortical origins of spatial memory and the concept of the cortex as a monitoring device.
Developing a test to assess the ability of Zurich’s police cadets to discriminate, learn and recognize voices
The goal of this pilot study is to develop a test through which people with extraordinary voice recognition and discrimination skills can be identified (for forensic purposes). Since interest in this field emerged, three studies have been published with the goal of finding people with potential super-recognition skills in voice processing. One of them is a discrimination test and two are recognition tests, but none combines the two test scenarios, and their test designs cannot be directly compared to a casework scenario in forensic phonetics. The pilot study at hand attempts to bridge this gap and analyses whether voice discrimination and recognition skills correlate. The study is guided by a practical, forensic application, which further complicates the process of creating a viable test. The participants for the pilot consist of different classes of police cadets, which means the test can be repeated and adjusted over time.
Commonly used face cognition tests yield low reliability and inconsistent performance: Implications for test design, analysis, and interpretation of individual differences data
Unfamiliar face processing (face cognition) ability varies considerably in the general population. However, the means of its assessment are not standardised, and the laboratory tests selected vary between studies. It is also unclear whether 1) the most commonly employed tests are reliable, 2) participants show a degree of consistency in their performance, and 3) the face cognition tests broadly measure one underlying ability, akin to general intelligence. In this study, we asked participants to perform eight tests frequently employed in the individual differences literature. We examined the reliability of these tests, the relationships between them, and the consistency in participants’ performance, and used data-driven approaches to determine the factors underpinning performance. Overall, our findings suggest that the reliability of these tests is poor to moderate, the correlations between them are weak, the consistency in participant performance across tasks is low, and performance can be broadly split into two factors: telling faces together and telling faces apart. We recommend that future studies adjust analyses to account for stimuli (face images) and participants as random factors, routinely assess reliability, and examine newly developed tests of face cognition for convergent validity with other commonly used measures of face cognition ability.
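The routine reliability check recommended above can be illustrated with a minimal sketch: a split-half estimate with the Spearman-Brown step-up, computed here on simulated trial-level accuracy data (all variable names and data are illustrative, not from the study).

```python
import numpy as np

def split_half_reliability(trial_scores, seed=0):
    """Split-half reliability with Spearman-Brown correction.

    trial_scores: (n_participants, n_trials) array of per-trial accuracy.
    Trials are randomly split into two halves; the correlation between
    half scores is then stepped up to full-test length.
    """
    rng = np.random.default_rng(seed)
    n_trials = trial_scores.shape[1]
    order = rng.permutation(n_trials)
    half_a = trial_scores[:, order[: n_trials // 2]].mean(axis=1)
    half_b = trial_scores[:, order[n_trials // 2:]].mean(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown step-up

# Simulated data: 100 participants x 40 trials with a stable ability signal,
# so the two halves should correlate substantially.
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, size=(100, 1))
scores = (ability + rng.normal(0, 1, size=(100, 40)) > 0).astype(float)
print(round(split_half_reliability(scores), 2))
```

Averaging over several random splits (or using a permutation-based split-half estimate) gives a more stable figure than a single split.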
NMC4 Short Talk: Maggot brain, mirror image? A statistical analysis of bilateral symmetry in an insect brain connectome
Neuroscientists have many questions about connectomes that revolve around the ability to compare networks. For example, comparing connectomes could help explain how neural wiring is related to individual differences, genetics, disease, development, or learning. One such question is that of bilateral symmetry: are the left and right sides of a connectome the same? Here, we investigate the bilateral symmetry of a recently presented connectome of an insect brain, the Drosophila larva. We approach this question from the perspective of two-sample testing for networks. First, we show how this question of “sameness” can be framed as a variety of different statistical hypotheses, each with different assumptions. Then, we describe test procedures for each of these hypotheses. We show how these different test procedures perform on both the observed connectome as well as a suite of synthetic perturbations to the connectome. We also point out that these tests require careful attention to parameter alignment and differences in network density in order to provide biologically meaningful results. Taken together, these results provide the first statistical characterization of bilateral symmetry for an entire brain at the single-neuron level, while also giving practical recommendations for future comparisons of connectome networks.
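The coarsest of the "sameness" hypotheses mentioned above compares the two hemispheres under an Erdős–Rényi model: do the left and right subgraphs share a single connection density? A toy sketch on simulated adjacency matrices (not the authors' pipeline or data) is:

```python
import numpy as np
from scipy.stats import fisher_exact

def density_test(adj_left, adj_right):
    """Two-sample Erdos-Renyi density test for two directed graphs.

    Under H0 both graphs have the same edge probability; Fisher's exact
    test compares realized edges against possible edges in each graph.
    """
    def edge_counts(adj):
        n = adj.shape[0]
        possible = n * (n - 1)                 # directed, no self-loops
        edges = int(adj.sum() - np.trace(adj))
        return edges, possible - edges

    table = [edge_counts(adj_left), edge_counts(adj_right)]
    _, p = fisher_exact(table)
    return p

# Two simulated 50-node graphs with different densities by construction.
rng = np.random.default_rng(0)
n = 50
left = (rng.random((n, n)) < 0.10).astype(int)
right = (rng.random((n, n)) < 0.20).astype(int)
np.fill_diagonal(left, 0)
np.fill_diagonal(right, 0)
print(density_test(left, right) < 0.05)
```

Rejecting this test only says the densities differ; the richer hypotheses in the talk (e.g., block-model or latent-position equivalence) require correspondingly richer test statistics, and, as noted above, careful handling of neuron alignment and density differences.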
Computational Models of Fine-Detail and Categorical Information in Visual Working Memory: Unified or Separable Representations?
When we remember a stimulus, we rarely maintain a full-fidelity representation of the observed item. Our working memory instead maintains a mixture of the observed feature values and categorical/gist information. I will discuss evidence from computational models supporting a mix of categorical and fine-detail information in working memory. Having established the need for two memory formats in working memory, I will discuss whether categorical and fine-detail information for a stimulus are represented separately or as a single unified representation. Computational models of these two potential cognitive structures make differing predictions about the pattern of responses in visual working memory recall tests. The present study required participants to remember the orientation of stimuli for later reproduction. The pattern of responses is used to test the competing representational structures and to quantify the relative amounts of fine-detail and categorical information maintained. The effects of set size, encoding time, serial order, and response order on memory precision, categorical information, and guessing rates are also explored. (This is a 60 min talk.)
Consistency of Face Identity Processing: Basic & Translational Research
Previous work on individual differences in face identity processing (FIP) has found that the most commonly used lab-based performance assessments are not, on their own, sufficiently sensitive to measure performance in both the upper and lower tails of the general population simultaneously. More recently, researchers have therefore begun incorporating multiple testing procedures into their assessments. Still, the growing consensus is that, at the individual level, there is considerable variability between test scores; the consequence is that extreme scores will still occur by chance in large enough samples. To mitigate this issue, our recent work has developed measures of intra-individual FIP consistency to refine the selection of those with superior abilities (i.e. from the upper tail). First, we assessed the consistency of face matching and recognition in neurotypical controls and compared them with a sample of super-recognisers (SRs). For face matching, we demonstrated psychophysically that SRs are significantly more consistent than controls in exploiting spatial frequency information. For face recognition, we showed that SRs’ recognition of faces is highly related to the memorability of identities, yet the two are effectively unrelated among controls. Overall, at the high end of the FIP spectrum, consistency can be a useful tool for revealing both qualitative and quantitative individual differences. Finally, in conjunction with collaborators from the Rheinland-Pfalz Police, we developed a pair of bespoke work samples to obtain bias-free measures of intra-individual consistency in current law enforcement personnel. Officers with higher composite scores on a set of three challenging FIP tests tended to show higher consistency, and vice versa.
Overall, this suggests that consistency is not only a reasonably good marker of superior FIP abilities but could also offer important practical benefits for personnel selection in many other domains of expertise.
Neural Population Dynamics for Skilled Motor Control
The ability to reach, grasp, and manipulate objects is a remarkable expression of motor skill, and the loss of this ability in injury, stroke, or disease can be devastating. These behaviors are controlled by the coordinated activity of tens of millions of neurons distributed across many CNS regions, including the primary motor cortex. While many studies have characterized the activity of single cortical neurons during reaching, the principles governing the dynamics of large, distributed neural populations remain largely unknown. Recent work in primates has suggested that during the execution of reaching, motor cortex may autonomously generate the neural pattern controlling the movement, much like the spinal central pattern generator for locomotion. In this seminar, I will describe recent work that tests this hypothesis using large-scale neural recording, high-resolution behavioral measurements, dynamical systems approaches to data analysis, and optogenetic perturbations in mice. We find, by contrast, that motor cortex requires strong, continuous, and time-varying thalamic input to generate the neural pattern driving reaching. In a second line of work, we demonstrate that the cortico-cerebellar loop is not critical for driving the arm towards the target, but instead fine-tunes movement parameters to enable precise and accurate behavior. Finally, I will describe my future plans to apply these experimental and analytical approaches to the adaptive control of locomotion in complex environments.
Analyzing Retinal Disease Using Electron Microscopic Connectomics
John E. Dowling received his AB and PhD from Harvard University. He taught in the Biology Department at Harvard from 1961 to 1964, first as an Instructor, then as assistant professor. In 1964 he moved to Johns Hopkins University, where he held an appointment as associate professor of Ophthalmology and Biophysics. He returned to Harvard as professor of Biology in 1971, was the Maria Moors Cabot Professor of Natural Sciences from 1971-2001, Harvard College professor from 1999-2004, and is presently the Gordon and Llura Gund Professor of Neurosciences. Dowling was chairman of the Biology Department at Harvard from 1975 to 1978 and served as associate dean of the Faculty of Arts and Sciences from 1980 to 1984. He was Master of Leverett House at Harvard from 1981-1998 and currently serves as president of the Corporation of The Marine Biological Laboratory in Woods Hole. He is a Fellow of the American Academy of Arts and Sciences, a member of the National Academy of Sciences, and a member of the American Philosophical Society. Dowling's awards include the Friedenwald Medal from the Association for Research in Vision and Ophthalmology in 1970, the Annual Award of the New England Ophthalmological Society in 1979, the Retinal Research Foundation Award for Retinal Research in 1981, an Alcon Vision Research Recognition Award in 1986, a National Eye Institute MERIT award in 1987, the Von Sallman Prize in 1992, the Helen Keller Prize for Vision Research in 2000, and the Llura Ligget Gund Award for Lifetime Achievement and Recognition of Contribution to the Foundation Fighting Blindness in 2001. He was granted an honorary MD degree by the University of Lund (Sweden) in 1982 and an honorary Doctor of Laws degree from Dalhousie University (Canada) in 2012. Dowling's research interests have focused on the vertebrate retina as a model piece of the brain.
He and his collaborators have long been interested in the functional organization of the retina, studying its synaptic organization, the electrical responses of the retinal neurons, and the mechanisms underlying neurotransmission and neuromodulation in the retina. About 20 years ago, Dowling became interested in zebrafish as a system in which to explore the development and genetics of the vertebrate retina. Part of his research team has focused on retinal development in zebrafish and the role of retinoic acid in early eye and photoreceptor development. A second group has developed behavioral tests to isolate mutations, both recessive and dominant, specific to the visual system.
A brain circuit for curiosity
Motivational drives are internal states that can differ even in similar interactions with external stimuli. Curiosity, the motivational drive for novelty seeking and investigating the surrounding environment, is as essential and intrinsic to survival as hunger. Curiosity, hunger, and appetitive aggression drive three different goal-directed behaviors—novelty seeking, food eating, and hunting—but these behaviors are composed of similar actions in animals. This similarity of actions has made it challenging to study novelty seeking and distinguish it from eating and hunting in nonverbal animals, and the brain mechanisms underlying this basic survival drive have remained unclear. Despite well-developed techniques for studying mouse brain circuits, the field of motivational behavior is full of controversial and conflicting results, leaving the functions of motivational brain regions such as the zona incerta (ZI) uncertain. The lack of a transparent, nonreinforced, and easily replicable paradigm is one of the main causes of this uncertainty. We therefore chose a simple solution: giving the mouse the freedom to choose what it wants—a double free-access choice. By examining mice in an experimental battery of object free-access double-choice (FADC) and social interaction tests—using optogenetics, chemogenetics, calcium fiber photometry, multichannel electrophysiological recording, and multicolor mRNA in situ hybridization—we uncovered a cell type–specific cortico-subcortical brain circuit of curiosity and novelty-seeking behavior. We found in mice that inhibitory neurons in the medial ZI (ZIm) are essential for the decision to investigate an object or a conspecific. These neurons receive excitatory input from the prelimbic cortex signaling the initiation of exploration. This signal is modulated in the ZIm by the level of investigatory motivation.
Increased activity in the ZIm instigates deep investigative action by inhibiting the periaqueductal gray region. A subpopulation of inhibitory ZIm neurons expressing tachykinin 1 (TAC1) modulates the investigatory behavior.
The Jena Voice Learning and Memory Test (JVLMT)
The ability to recognize someone’s voice spans a broad spectrum, with phonagnosia at the low end and super-recognition at the high end. Yet there is no standardized test to measure the individual ability to learn and recognize newly learnt voices using samples with speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 20-minute test based on item response theory and applicable across different languages. The JVLMT consists of three phases in which participants are familiarized with eight speakers in two stages and then perform a three-alternative forced-choice recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with different levels of difficulty. Test scores are based on 22 Rasch-conform items. Items were selected and validated in online studies based on 232 and 454 participants, respectively. Mean accuracy is 0.51 (SD = 0.18). The JVLMT showed high and moderate correlations with convergent validation tests (Bangor Voice Matching Test; Glasgow Voice Memory Test) and a weak correlation with a discriminant validation test (Digit Span). Empirical (marginal) reliability is 0.66. Four participants with super-recognition abilities (at least 2 SDs above the mean) and 7 participants with phonagnosia (at least 2 SDs below the mean) were identified. The JVLMT is a promising screening tool for voice recognition abilities in scientific and neuropsychological contexts.
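The Rasch model underlying the JVLMT's item calibration can be sketched in a few lines: the probability that a participant of ability θ answers an item of difficulty b correctly is a logistic function of θ − b. The values below are illustrative, not the published item parameters.

```python
import math

def rasch_p_correct(theta, b):
    """Rasch (one-parameter logistic) model: probability of a correct
    response given person ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An item exactly matched to the participant's ability is answered
# correctly half the time; easier items more often, harder items less.
print(rasch_p_correct(0.0, 0.0))        # 0.5
print(rasch_p_correct(1.0, 0.0) > 0.5)  # easier relative to ability
print(rasch_p_correct(-1.0, 0.0) < 0.5) # harder relative to ability
```

Because person and item parameters sit on the same scale, items can be selected to cover the ability range of interest, which is what makes a short 22-item test informative across the whole spectrum from phonagnosia to super-recognition.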
Portable neuroscience: using devices and apps for diagnosis and treatment of neurological disease
Scientists work in laboratories: comfortable spaces that we equip and configure to be ideal for our needs. Clinicians have adopted this paradigm, running diagnostic tests and treatments in fully equipped hospital facilities. Yet advances in technology mean that many functions of a laboratory can increasingly be compressed into miniature devices, or even into a smartphone app. This has the potential to transform healthcare in developing nations, making complex tests and interventions available in every village. In this talk, I will give two examples of this approach from my recent work. In the field of stroke rehabilitation, I will present basic research that we have conducted in animals over the last decade. It reveals new ways to intervene and strengthen surviving pathways, which can be deployed in inexpensive electronic devices to enhance functional recovery. In degenerative disease, we have used Bayesian statistical methods to improve an algorithm that measures how rapidly a subject can stop an action. We then implemented this on a portable device and in a smartphone app; the resulting measurement can act as a useful screen for Parkinson's disease. I conclude with an outlook on the future of this approach, and an invitation to those who would be interested in collaborating to roll it out in African settings.
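For background on the stop-signal measure mentioned above: the classical (non-Bayesian) "integration method" estimates the stop-signal reaction time (SSRT) as the quantile of the go-trial RT distribution corresponding to the probability of failing to stop, minus the mean stop-signal delay. A minimal sketch with entirely hypothetical numbers; the talk's Bayesian refinement is not reproduced here:

```python
from statistics import mean

def ssrt_integration(go_rts, stop_delays, p_respond):
    """Integration-method SSRT estimate:
    take the go RT at the p_respond quantile of the go-RT
    distribution and subtract the mean stop-signal delay (SSD)."""
    rts = sorted(go_rts)
    # index of the RT at the p_respond quantile of the go distribution
    idx = min(int(p_respond * len(rts)), len(rts) - 1)
    return rts[idx] - mean(stop_delays)

# Hypothetical session: go RTs in ms, SSDs in ms, 50% failed stops.
go_rts = [380, 410, 395, 450, 420, 405, 470, 390, 440, 430]
ssds = [200, 250, 225, 275]
print(f"SSRT estimate: {ssrt_integration(go_rts, ssds, 0.5):.1f} ms")
```

Point estimators like this are noisy with short sessions, which is one motivation for hierarchical Bayesian approaches that pool information across trials and subjects.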
The problem of power in single-case neuropsychology
Case-control comparisons are the gold-standard method for diagnosing and researching neuropsychological deficits and dissociations at the single-case level. These statistical tests, developed by John Crawford and collaborators, provide quantitative criteria for the classical concepts of deficit, dissociation, and double dissociation. Much attention has been given to controlling Type I (false positive) errors in these tests, but far less to avoiding Type II (false negative) errors; that is, to statistical power. I will describe the origins and limits of statistical power for case-control comparisons, showing that there are hard upper limits on power with important implications for the design and interpretation of single-case studies. My aim is to stimulate discussion of the inferential status of single-case neuropsychological evidence, particularly with respect to contemporary ideals of open science and study preregistration.
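The case-control comparison referred to here is typically Crawford and Howell's (1998) modified t-test, which treats the single case as a sample of size one and inflates the standard error accordingly. A minimal sketch with hypothetical scores (not data from the talk):

```python
import math
from statistics import mean, stdev

def crawford_howell_t(case_score, control_scores):
    """Crawford & Howell (1998) modified t-test comparing a single case
    against a small control sample. The case is treated as a sample of
    n = 1, so the standard error is inflated by sqrt(1 + 1/n).
    Returns (t, degrees of freedom = n - 1)."""
    n = len(control_scores)
    m, s = mean(control_scores), stdev(control_scores)
    t = (case_score - m) / (s * math.sqrt(1.0 + 1.0 / n))
    return t, n - 1

# Hypothetical data: a patient's memory score against 10 controls.
controls = [12, 14, 11, 13, 15, 12, 14, 13, 12, 14]
t, df = crawford_howell_t(7, controls)
print(f"t = {t:.2f}, df = {df}")
```

Because the control sample's variability enters the standard error and degrees of freedom are fixed at n - 1, small control samples impose a ceiling on achievable power regardless of how extreme the case score is, which connects to the hard upper limits discussed in the talk.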
Accuracy versus consistency: Investigating face and voice matching abilities
Deciding whether two different face photographs or voice samples come from the same person represents a fundamental challenge in applied settings. To date, most research has focused on average performance in these tests, failing to consider individual differences and within-person consistency in responses. In the current studies, participants completed the same face or voice matching test on two separate occasions, allowing comparison of overall accuracy across the two timepoints as well as consistency in trial-level responses. In both experiments, participants were highly consistent in their performance. In addition, we demonstrated a large association between consistency and accuracy, with the most accurate participants also tending to be the most consistent. This is an important result for applied settings in which organisational groups of super-matchers are deployed in real-world contexts: being able to reliably identify these high performers from a single test informs recruitment for law enforcement agencies worldwide.
Algorithmic advances in face matching: Stability of tests in atypical groups
Face matching tests have traditionally been developed to assess human face perception in the neurotypical range, but the methods underlying their development often make these measures difficult to apply in atypical populations (developmental prosopagnosics, super recognizers) because their difficulty is not adjusted. We recently presented the Oxford Face Matching Test (OFMT), a measure that bases individual item difficulty on the algorithmically derived similarity of the presented stimuli. The measure can be administered online or in the laboratory, and has good discriminability and high test-retest reliability in neurotypical groups. In addition, it has good validity in separating atypical groups at either end of the spectrum. In this talk, I examine the stability of the OFMT and other traditionally used measures in atypical groups. Beyond the theoretical significance of determining whether test reliability is equivalent in atypical populations, this question matters practically because different lab groups often retest the same participants. Theoretical and practical implications for further test development and data sharing are discussed.
Panel discussion: Practical advice for reproducibility in neuroscience
This virtual, interactive panel on reproducibility in neuroscience will focus on practical advice that researchers at all career stages could implement to improve the reproducibility of their work, from power analyses and pre-registering reports to selecting statistical tests and data sharing. The event will comprise introductions of our speakers and how they came to be advocates for reproducibility in science, followed by a 25-minute discussion on reproducibility, including practical advice for researchers on how to improve their data collection, analysis, and reporting, and then 25 minutes of audience Q&A. In total, the event will last one hour and 15 minutes. Afterwards, some of the speakers will join us for an informal chat and Q&A reserved only for students/postdocs.
Lab-on-a-chip and diagnostic tools for COVID-19
The SARS-CoV-2 virus has rapidly evolved into a pandemic that threatens public health, economies, and quality of life worldwide. The gold standard for COVID-19 testing is traditional RT-qPCR, which is expensive and can take several hours. Expanding surveillance on a global scale will require new strategies and tests that are inexpensive, require minimal reagents, decrease assay time, and allow simple point-of-care (POC) monitoring with quick turnaround and no need for trained personnel. To speed up COVID-19 surveillance, we are developing a point-of-care microfluidic chip that enables significantly faster and easier testing. It is based on digital droplet loop-mediated isothermal amplification, which will allow rapid testing of large populations at reasonable cost. The device will employ a nucleic acid test, reverse transcription LAMP (RT-LAMP), that operates at 60-65°C. RT-LAMP removes the thermal-cycling bottleneck of traditional RT-qPCR. Its simplicity, speed, and sensitivity will enable early treatment and response to infection.
Super-Recognizers: facts, fallacies, and the future
Over the past decade, the domain of face identity processing has seen a surging interest in inter-individual differences, with a focus on individuals with superior skills, so-called Super-Recognizers (SRs; Ramon et al., 2019; Russell et al., 2009). Their study can provide valuable insights into brain-behavior relationships and advance our understanding of neural functioning. Despite a decade of research, and similarly to the field of developmental prosopagnosia, a consensus on diagnostic criteria for SR identification is lacking. Consequently, SRs are currently identified either inconsistently, via suboptimal individual tests, or via undocumented collections of tests. This state of the field has two major implications. Firstly, our scientific understanding of SRs will remain at best limited. Secondly, the needs of government agencies interested in deploying SRs for real-life identity verification (e.g., policing) are unlikely to be met. To counteract these issues, I suggest the following action points. Firstly, based on our and others’ work suggesting novel and challenging tests of face cognition (Bobak et al., 2019; Fysh et al., in press; Stacchi et al., 2019), and my collaborations with international security agencies, I recommend novel diagnostic criteria for SR identification. These are currently being used to screen the Berlin State Police’s >25K employees before identifying SRs via bespoke testing procedures we have collaboratively developed over the past years. Secondly, I introduce a cohort of SRs identified using these criteria, which is being studied in-depth using behavioral methods, psychophysics, eye-tracking, and neuroimaging. Finally, I suggest data acquired for these individuals should be curated to develop and share best practices with researchers and practitioners, and to gain an accurate and transparent description of SR cases to exploit their informative value.
How sleep remodels the brain
Fifty years ago it was found that sleep somehow made memories better and more permanent, but neither sleep nor memory researchers knew enough about either field to devise robust, effective tests. Today the fields of sleep and memory have grown, and what is now understood is astounding. Still, great mysteries remain. What is the functional difference between the subtly different slow oscillation and the slow wave of sleep, and do they really have opposite effects on memory consolidation? How do short spindles (e.g., <0.5 s, as in schizophrenia) differ in function from longer ones, and are longer spindles key to integrating new memories with old? Is the nesting of slow oscillations together with sleep spindles and hippocampal ripples necessary? What happens if all else is fine but the neurochemical environment is altered? Does sleep become maladaptive and "cement" memories into the hippocampal warehouse where they are assembled, together with all of their emotional baggage? Does maladaptive sleep underlie post-traumatic stress disorder and other stress-related disorders? How do we optimize sleep characteristics for top emotional and cognitive function? State-of-the-art findings and current hypotheses will be presented.
Spanning the arc between optimality theories and data
Ideas about optimization are at the core of how we approach biological complexity. Quantitative predictions about biological systems have been successfully derived from first principles in the context of efficient coding, metabolic and transport networks, evolution, reinforcement learning, and decision making, by postulating that a system has evolved to optimize some utility function under biophysical constraints. Yet as normative theories become increasingly high-dimensional and optimal solutions stop being unique, it becomes progressively harder to judge whether theoretical predictions are consistent with, or "close to", the data. I will illustrate these issues using efficient coding applied to simple neuronal models as well as to a complex, realistic biochemical reaction network. As a solution, we developed a statistical framework that smoothly interpolates between ab initio optimality predictions and Bayesian parameter inference from data, while also permitting statistically rigorous tests of optimality hypotheses.
Antidepressant-like effect of curcumin in olfactory bulbectomized model of depression in male Wistar albino rats: Antidepressant behavior screening tests
FENS Forum 2024
Dopamine deficient mice show anergia but not anhedonia on tests of vigor-based choice
FENS Forum 2024