Modelling
VIB Center for Computational Biology and AI
The VIB Center for AI & Computational Biology (VIB.AI) and KU Leuven are searching for three Principal Investigators to join their faculty (Group Leader at VIB, Professor at KU Leuven). We are particularly interested in recruiting faculty members who use and develop artificial intelligence methods and mechanistic mathematical models to address fundamental questions in biology. We welcome applications across all domains of machine learning, artificial intelligence, and associated fields. Examples of research topics include, but are not limited to: development of new AI architectures for biology and hybrid models that combine deep learning with mechanistic models; foundation models of genome regulation using single-cell and spatial multi-omics data; AI-based modeling of protein structure and protein interaction networks; AI-based modeling of cell morphology and tissue function using imaging and computer vision; and AI models of disease and digital-twin applications. Biological applications are broad: microorganisms, plant biology, biodiversity and ecology, neuroscience, cancer, and immunology. We also welcome applicants with applied projects, including synthetic biology, AI-driven experiments (experiment-in-the-loop), and bio-engineering. The position comes with full salary and core funding (an internationally competitive package) that is renewable for multiple additional 5-year periods, access to state-of-the-art research infrastructure and top-notch core facilities, and support to attract talented PhD students and postdocs from across the world.

Assignment

Research. As a VIB Group Leader and KU Leuven Professor, you will be expected to (continue to) build your research program with your own independent research group, and to set up or consolidate a strong network with other researchers within VIB, KU Leuven and beyond. You strive for excellence in your research and thereby contribute to the scientific development of the new VIB.AI center.

Teaching.
The candidate will be appointed at the Faculty of Medicine or the Faculty of Engineering Science, and will take up teaching assignments (in English or in Dutch) in either of these faculties. You ensure high-quality education, with a clear commitment to the quality of the program as a whole. You also contribute to the pedagogic project of the university through the supervision of MSc theses and as a promoter of PhD students.

Service. You provide scientific, societal and internal services that contribute to the reputation of the entire VIB and university.

Your profile
- PhD or equivalent experience in machine learning or a related quantitative field (Computer Science, Artificial Intelligence, Statistics, Mathematics, Physics, Computational Biology/Chemistry).
- Candidates will be considered at junior or more senior levels, assuming relevant (post-doctoral) experience in foundational machine learning research and/or applied machine learning in an academic and/or industrial setting.
- Excellent machine learning and programming experience.
- Impactful published research that demonstrates creativity and originality and addresses relevant problems in biology and computational research.
- Demonstrated ability to acquire competitive funding.
- An interdisciplinary mindset and keenness to collaborate broadly within the center, the department and the university.
- Motivation to guide postdoctoral researchers, PhD interns, and full-time scientists.
- International working experience.
- A thorough knowledge of spoken and written English.
- The official administrative language used at KU Leuven is Dutch. If you do not speak Dutch (or do not speak it well) at the start of employment, KU Leuven will provide language training to enable you to take part in meetings. Before teaching courses in Dutch or English, you will be given the opportunity to learn Dutch or English, respectively, to the required standard.

We offer
- Substantial core research funding that is renewable every 5 years.
- An attractive employment package including 100% of your salary and excellent health benefits.
- Access to a vibrant academic environment that encourages collaboration with top experts in biology, computational biology, engineering and computer science, both at VIB and KU Leuven.
- Access to a highly talented pool of students in biology, computational biology, engineering and computer science from the KU Leuven bachelor and master programs.
- New, open-design, state-of-the-art research space.
- Access to computing cluster infrastructure at the VIB Data Core and the Flemish Supercomputer Center.
- Access to excellent, staffed core facilities at the Center, at VIB, and at KU Leuven (including sequencing, proteomics, single-cell, microscopy, data core, and many more).
- The possibility to also perform wet-lab activities in state-of-the-art infrastructure.
- Broad administrative support, including help recruiting technicians, PhD students and postdoctoral scientists for your group.
- Access to a dedicated business development team specialized in technology transfer and valorization.
- Professional leadership training.
- An internationally recognized workplace that values diversity and promotes an inclusive environment.
- Help with relocation and establishing a life in Belgium, including visa applications (if necessary) and finding housing, schooling and daycare.

The successful candidate must also be selected for a Professor position at KU Leuven.

About the VIB Center for AI and Computational Biology

VIB (Flanders Institute for Biotechnology) is an entrepreneurial non-profit research institute with a clear focus on ground-breaking strategic basic research in the life sciences, operating in close partnership with the five universities in Flanders. VIB strives for a respectful and supportive working environment and a culture of belonging for diverse talents in the organization.
VIB.AI was established in 2023 as the 10th VIB center, with the core mission to study fundamental problems in biology by combining machine learning with in-depth knowledge of biological processes. We aim to work towards foundation models and integrative theories of biological systems, and towards innovative AI-driven biotech applications in synthetic biology, agro-tech, and personalized medicine. AI-driven research at VIB.AI starts from biological questions and challenges that are addressed using state-of-the-art and novel computational and AI strategies, through close interactions and iterations with biological experiments and research labs within VIB.AI and across VIB. Additionally, we are committed to fostering computational and AI research excellence across all VIB Centers, amplifying collaboration (12 co-associated group leaders) and pushing the boundaries of innovation in all of VIB’s research domains (plant biology, cancer biology, structural biology, medical biotechnology, neuroscience, immunology and microbiology).

KU Leuven

This position is linked to a professor position at KU Leuven. Based on the profile and research topic, the position will be linked to a KU Leuven Department, namely DME, DCMM or ESAT/CS, with one position available per Department.

Department of Human Genetics (DME)

The Department of Human Genetics (DME) is a leading European center for human genetics. Its primary objectives revolve around achieving excellence in research, education, and translational applications, with the aim of enhancing genetic diagnosis, counselling, therapy, and preventive measures. The department's research portfolio encompasses both fundamental and clinical research conducted across various domains, including cultured cells, animal models, and human subjects. The research focus of DME is genome structure, function, and development, using state-of-the-art genomics and bioinformatics methodologies and deploying them for the diagnosis and treatment of genetic disorders.
Department of Cellular and Molecular Medicine (DCMM)

The research focus of DCMM is the exploration of the molecular mechanisms of disease: basic cellular and molecular processes, their (patho)physiological effects, and their implications in various human diseases. DCMM combines expertise in techniques of biochemistry, electrophysiology, molecular biology, cell imaging, proteomics, bioinformatics and animal-model development to acquire novel insights into cellular signalling and communication processes. Major areas of research include signalling by ions, lipids and protein phosphorylation; chromatin structure and function; protein structure, (mis)folding and transport; and cell metabolism, death, autophagy and differentiation.

Department of Electrical Engineering (ESAT)

ESAT works on several technological innovations in the fields of energy, integrated circuits, information processing, image and speech processing, and telecommunication systems. ESAT has seven distinct groups: Computer Security and Industrial Cryptography (COSIC), Electrical Energy Systems and Applications (ELECTRA), Electronic Circuits and Systems (MICAS), Micro- and Nanosystems (MNS), Processing Speech and Images and Center for Dynamical Systems (PSI), Signal Processing and Data Analytics (STADIUS), and Waves: Core Research and Engineering (WAVECORE).

Department of Computer Science (CS)

The Department of Computer Science is globally recognized for its exceptional research and academic programs in fields such as informatics, computer science, artificial intelligence, mathematical engineering, digital humanities, and teacher education. The department comprises five distinct units: Distributed and Secure Software (DistriNet), Declarative Languages and Artificial Intelligence (DTAI), Human-Computer Interaction (HCI), Numerical Analysis and Applied Mathematics (NUMA), and Computer Science.
The Leuven life sciences ecosystem and the city of Leuven

Leuven and the surrounding area of northern Belgium (Flanders) represent one of the top research destinations in Europe. VIB.AI is part of the VIB Life Sciences Institute (with colleagues in cancer, biotechnology, immunology, plants and microbiology, amongst others) and an extensive set of core facilities. KU Leuven, the largest university in Belgium and Europe's most innovative university, is home to the Leuven AI Institute and the Leuven University Hospital, one of the largest in Europe. Leuven is also home to Imec, a world-renowned research center for nanoelectronics and digital technology. Leuven is an attractive European university city with a rich history and a lively atmosphere. The area has a strong biotechnology sector with a wide variety of spin-offs and start-ups. This, together with the presence of the University of Leuven and the University Hospital, makes Leuven particularly internationally oriented and tech-minded, and a natural home for researchers and their families; the city was even awarded the title of European Capital of Innovation 2020 by the European Commission. English is very widely spoken in the city and surrounding area. Leuven offers an affordable, high standard of living, an international school, and ample daycare options. The public education and public health care systems in Flanders are world-class, easily accessible, and low-cost to end users. Public transport is excellent and widely available. Brussels, the capital of Europe, is only 20 minutes away. Leuven is also only 14 minutes by train from Brussels Airport, which has many daily direct flights to North America, Africa and Asia. There are also high-speed direct international rail connections to numerous cities including Paris, London, Frankfurt and Amsterdam.

Start date: 2024

How to apply?
Please use the VIB HR application tool and upload:
- a cover/motivation letter
- your full CV with publication list
- a 2-page biosketch including your top 5 publications or achievements
- contact details of 3 referees
- a 2-4 page statement of your research plan, including a brief statement reflecting your vision for the new VIB.AI center

Application deadline: 31 January 2024

For more information, contact Stein Aerts (stein.aerts@vib.be), director of VIB.AI.
Prof. Ileana Hanganu-Opatz
Our group investigates the synchronized patterns of electrical activity in the immature brain, their relevance for the development of cognitive and (multi)sensory abilities, and their impairment in neuropsychiatric disorders (www.opatzlab.com). The multidisciplinary project is funded by an EU grant and encompasses in vivo electrophysiology and optogenetics, as well as behavioral and pharmacological investigations. We offer state-of-the-art facilities and a stimulating scientific environment in a dynamic, young and interdisciplinary team. We guarantee extensive and individual training.
Thomas Euler
Visual processing starts in the retina, where at least 40 distinct features are extracted and sent through parallel channels to higher visual centers in the brain. One of the biggest remaining challenges in retinal research is to understand how these diverse representations arise within the retinal circuits. The origin of this vast functional diversity lies in the retina's second synaptic layer, the inner plexiform layer, where bipolar cells, amacrine cells and ganglion cells form complex interconnected networks. In particular, the amacrine cells are crucial for decorrelating different functional channels: they tune the ganglion cells' responses, which represent the retina's output, by modulating glutamate release from bipolar cells and by heavily shaping signal integration in the ganglion cell dendritic arbors. Still, surprisingly little is known about the great majority of the 60+ genetic types of amacrine cells and their intricate networks in the inner retina. In this project, we aim to dissect the functional roles of different amacrine cell circuits in image processing. To this end, we will combine functional 2-photon imaging of excitatory and inhibitory signals in the mouse retina with computational modeling based on connectomics data from electron microscopy.
Prof Laura Busse
Two PhD positions, each part of an interdisciplinary collaboration, are available in Laura Busse's group at the Department of Biology II of the LMU Munich. We study neural circuits of visual perception in awake, behaving mice, combining extracellular electrophysiological recordings with genetic tools for circuit manipulation. The first position is part of the DFG-funded Collaborative Research Center "Robust Vision: Inference Principles and Neural Mechanisms". In collaboration with Philipp Berens (data analysis, University of Tübingen) and Thomas Euler (retinal imaging, University of Tübingen), the project builds upon Roman Roson*, Bauer* et al. (2019), and will investigate how feedforward, feedback, and neuromodulatory inputs to the dorsolateral geniculate nucleus (dLGN) of the thalamus shape visual representations. The project will include opportunities for in vivo extracellular recordings in mouse dLGN, optogenetic manipulations of cortico-thalamic feedback, and advanced modeling approaches (in Philipp Berens' lab). A complementary PhD position, based primarily in Tübingen, will have a computational focus, centered on modeling the experimental findings. The second position is part of the DFG-funded Priority Program "Computational Connectomics" and will be carried out in collaboration with Dr. Tatjana Tchumatchenko at the University of Bonn and the Max Planck Institute for Brain Research in Frankfurt. The project combines questions from neurobiology and theoretical neuroscience. It will exploit simultaneous thalamic/cortical recordings and optogenetic manipulations to investigate how feedforward inputs and recurrent connectivity in the thalamocortical loop shape population activity in the primary visual cortex. The successful candidate will perform extracellular recordings and optogenetics in mice, use quantitative data analysis, and collaborate with our theory partner in Bonn/Frankfurt on theoretical network analyses.
Interested candidates are welcome to establish contact via email to busse@bio.lmu.de. Both positions offer a thriving scientific environment, a structured PhD program and numerous opportunities for networking and exchange. Applications will need to go through the LMU Graduate School of Systemic Neuroscience (GSN online application, https://www.gsn.uni-muenchen.de). The deadline for applications is February 15.
Georgios N. Yannakakis
Join our AI research group at the Institute of Digital Games - University of Malta. We have a number of research posts (research associates, PhD students and Postdoctoral fellows) open currently. Be part of a research team that builds the next generation AI algorithms that play, feel and design games. We are looking for excellent candidates with a good grasp of as many of the following areas as possible: deep/shallow learning, affect annotation and modelling, human-computer interaction, computer vision, behaviour cloning, procedural content generation, generative systems.
Dr Andrej Bicanski
This project involves modelling the staggered development and the decline with age of spatial coding in the mammalian brain, as well as data analysis of single neuron recordings. The position is based at Newcastle University, UK, with a rotation in the lab of Prof. Colin Lever in Durham, UK. The project is fully funded for 4 years by the BBSRC. Both international and UK students can apply, and fees are covered.
Thomas Nowotny
You will develop novel active AI algorithms that are inspired by the rapid and robust learning of insects within the £1.2m EPSRC International Centre to Centre Collaboration project: “ActiveAI: active learning and selective attention for rapid, robust and efficient AI.” and will work in collaboration with the University of Sheffield and world-leading neuroscientists in Australia. Your primary role will be to develop a new class of ActiveAI controllers for problems in which insects excel but deep learning methods struggle. These problems have one or more of the following characteristics: (i) learning must occur rapidly, (ii) learning samples are few or costly, (iii) computational resources are limited, and (iv) the learning problem changes over time. Insects deal with such complex tasks robustly despite limited computational power because learning is an active process emerging from the interaction of evolved brains, bodies and behaviours. Through a virtuous cycle of modelling and experiments, you will develop insect-inspired models, in which behavioural strategies and specialised sensors actively structure sensory input while selective attention drives learning to the most salient information. The cycle of modelling and experiments will be achieved through field work in both Sussex and Australia.
N/A
The Institute of Robotics and Cognitive Systems at the University of Lübeck has a vacancy for an Assistant Professorship (Juniorprofessur, Tenure Track W2) in Robotics, for an initial period of three years with an option to extend for a further three years. The future holder of the position should represent the field of robotics in research and teaching, and shall establish their own working group at the Institute of Robotics and Cognitive Systems. The future holder of the position should have an excellent doctorate and demonstrable scientific experience in one or more of the following research areas:
- Modelling, simulation, and control of robots
- Robot kinematics and dynamics
- Robot sensor technology, e.g., force and moment sensing
- Robotic systems, e.g., telerobotic systems, humanoid robots
- Soft robotics and continuum robotics
- AI and machine learning methods in robotics
- Human-robot collaboration and safe autonomous robot systems
- AR/VR in robotics
- Applications of AI and robotics in medicine

The range of tasks also includes the acquisition of third-party funding and the assumption of project management responsibilities. The applicant is expected to be scientifically involved in the research focus areas of the institute and the profile areas of the university, especially in the context of projects acquired by the institute itself (public funding, industrial cooperations, etc.). The position holder is expected to be willing to cooperate with the "Lübeck Innovation Hub for Robotic Surgery" (LIROS), the "Center for Doctoral Studies Lübeck" and the "Open Lab for Robotics and Imaging in Industry and Medicine" (OLRIM). In teaching, participation in the degree programme "Robotics and Autonomous Systems" (German-language Bachelor's, English-language Master's), as well as in the other degree programmes of the university's STEM sections, is expected.
Eleonora Russo
One Ph.D. position is available within the National Ph.D. Program in 'Theoretical and Applied Neuroscience'. The Ph.D. will be held in the Brain Dynamics Lab at the Biorobotics Institute of Sant'Anna School of Advanced Studies, Pisa (Italy), in collaboration with the Kelsch Group at the University Medical Center, Johannes Gutenberg University, Mainz (Germany). Understanding the dynamical systems governing neuronal activity is crucial for unraveling how the brain performs cognitive functions. Historically, various forms of recurrent neural networks (RNNs) have been proposed as simplified models of the cortex. Recently, thanks to remarkable advancements in machine learning, RNNs' ability to capture temporal dependencies has been used to develop tools for approximating unknown dynamical systems by training them on observed time-series data. This approach allows us to use time series of electrophysiological multiple single-unit recordings, as well as whole-brain ultra-high-field functional imaging (fMRI), to parametrize neuronal population dynamics and build functional models of cognitive functions. The objective of this research project is to investigate the neuronal mechanisms underlying the reinforcement and depreciation of perceived stimuli in the extended network of the mouse forebrain regions. The PhD student will carry out their studies primarily at the BioRobotics Institute of Sant'Anna School of Advanced Studies. The project will expose the student to a highly international and interdisciplinary context, in tight collaboration with theoretical and experimental neuroscientists in Italy and abroad. At the BioRobotics Institute, the research groups involved will be the Brain Dynamics Lab, the Computational Neuroengineering Lab, and the Bioelectronics and Bioengineering Area. Moreover, the project will be carried out in tight collaboration with the experimental group of Prof. Wolfgang Kelsch, Johannes Gutenberg University, Mainz, Germany.
During the PhD, the student will have the opportunity to spend a period abroad.
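The core idea described above, approximating an unknown dynamical system from observed time series, can be illustrated in its simplest (linear) form: simulate a two-dimensional oscillator, then recover its one-step transition map by least squares on the recorded trajectory. This is a toy stand-in for the RNN-based approach the project describes, with illustrative parameters throughout.

```python
import numpy as np

# Ground-truth dynamics: a 2-D rotation (a noise-free linear oscillator).
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

# "Record" a trajectory, as one would record neuronal population activity.
rng = np.random.default_rng(0)
T = 500
x = np.empty((T, 2))
x[0] = rng.standard_normal(2)
for t in range(T - 1):
    x[t + 1] = A_true @ x[t]

# Fit the one-step map x[t+1] ≈ A_hat @ x[t] from the data alone.
# lstsq solves x[:-1] @ M = x[1:], and M is the transpose of A_hat.
M, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = M.T
```

In the project's setting, the linear map would be replaced by a trained RNN, which can capture nonlinear population dynamics that no single matrix can represent.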
Brad Wyble
The Department of Psychology at The Pennsylvania State University, University Park, PA, invites applications for a full-time Assistant or Associate Professor of Cognitive Psychology with an anticipated start date of August 2025. Areas of specialization within cognitive psychology are open and may include (but are not limited to) such topics as cognitive control, creativity, computational approaches and modelling, motor control, language science, memory, attention, perception, and decision making. A record of collaboration is desirable for both ranks. Substantial collaboration opportunities exist within the department, aligned with the department's cross-cutting research themes, and across campus. Current faculty in the cognitive area are active in units including the Center for Language Sciences, the Social Life and Engineering Sciences Imaging Center, the Center for Healthy Aging, the Center for Brain, Behavior, and Cognition, and the Applied Research Lab. Responsibilities of the Assistant or Associate Professor of Cognitive Psychology include maintaining a strong record of publications in top outlets. The position will include resident instruction at the undergraduate and graduate level and normal university service, based on the candidate's qualifications. A Ph.D. in Psychology or a related field is required by the appointment date for both ranks. Candidates for the tenure-track Assistant Professor position must have demonstrated ability as a researcher, scholar, and teacher in a relevant field and show evidence of growth in scholarly achievement. Candidates for the tenure-track Associate Professor position must have demonstrated excellence as a researcher, scholar, and teacher in a relevant field and have an established reputation in scholarly achievement.
Duties will involve a combination of teaching, research, and service, based on the candidate's qualifications. The ideal candidate will have a strong record of publications in top outlets and a history of, or potential for, external funding. In addition, successful candidates must either have demonstrated a commitment to building an inclusive, equitable, and diverse campus community, or describe one or more ways they would envision doing so, given the opportunity. Review of applications will begin immediately and will continue until the position is filled. Interested candidates should submit an online application at Penn State's Job Posting Board and upload the following application materials electronically: (1) a cover letter, (2) concise statements of research and teaching interests, (3) a CV, and (4) three selected (re)prints. System limitations allow for a total of 5 documents (5 MB per document) as part of your application; please combine materials to meet the 5-document limit. In addition, please arrange to have three letters of recommendation sent electronically to PsychApplications@psu.edu with the subject line "Cognitive Psychology". Questions regarding the application process can be emailed to PsychApplications@psu.edu, and questions regarding the position can be sent to the search chair: cogsearch@psu.edu. The Pennsylvania State University is committed to and accountable for advancing diversity, equity, and inclusion in all of its forms. We embrace individual uniqueness, foster a culture of inclusion that supports both broad and specific diversity initiatives, leverage the educational and institutional benefits of diversity, and engage all individuals to help them thrive. We value inclusion as a core strength and an essential element of our public service mission.
Penn State offers competitive benefits to full-time employees, including medical, dental, vision, and retirement plans, in addition to 75% tuition discounts (including for a spouse and dependent children up to the age of 26) and paid holidays.
Neurobiological constraints on learning: bug or feature?
Understanding how brains learn requires bridging evidence across scales, from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project, testing whether wiring motifs from fly brain connectomes can improve the performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
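A reservoir computer of the kind mentioned above can be sketched in a few lines: a fixed recurrent network is driven by an input signal, and only a linear readout is trained. The sparse random mask below marks where connectome-derived wiring motifs would be substituted; all sizes and constants are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
N, density = 200, 0.1  # reservoir size; sparse wiring (motifs would replace this mask)

# Fixed recurrent weights, rescaled so the spectral radius is < 1 (echo-state property).
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < density)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = 0.5 * rng.standard_normal(N)

# Drive the reservoir with a sine wave; the task is one-step-ahead prediction.
T = 1000
u = np.sin(0.1 * np.arange(T))
states = np.zeros((T, N))
h = np.zeros(N)
for t in range(T):
    h = np.tanh(W @ h + W_in * u[t])
    states[t] = h

# Train only the readout, by ridge regression: the defining trait of reservoir computing.
washout = 100  # discard the initial transient
X, y = states[washout:-1], u[washout + 1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
rmse = np.sqrt(np.mean((X @ W_out - y) ** 2))
```

Because only the readout is trained, swapping in a different wiring matrix W (random vs. connectome-motif) and comparing prediction error is a clean way to test whether the wiring itself helps.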
Computational modelling of ocular pharmacokinetics
Pharmacokinetics in the eye is an important factor in the success of ocular drug delivery and treatment. Pharmacokinetic features determine the feasible routes of drug administration and the dosing levels and intervals, and they influence eventual drug responses. Several physical, biochemical, and flow-related barriers limit drug exposure of anterior and posterior ocular target tissues during local (topical, subconjunctival, intravitreal) and systemic (intravenous, per oral) administration. Mathematical models integrate the joint impact of the various barriers on ocular pharmacokinetics (PK), thereby helping drug development. The models are useful for describing (top-down) and predicting (bottom-up) the pharmacokinetics of ocular drugs. This is also useful in the design and development of new drug molecules and drug delivery systems. Furthermore, the models can be used for interspecies translation and for probing disease effects on pharmacokinetics. In this lecture, ocular pharmacokinetics and current modelling methods (noncompartmental analyses and compartmental, physiologically based, and finite element models) are introduced. Future challenges are also highlighted (e.g. intra-tissue distribution, prediction of drug responses, active transport).
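As a minimal illustration of the compartmental approach mentioned above, the sketch below models an intravitreal bolus with a one-compartment, first-order elimination model. The dose, volume and clearance values are hypothetical placeholders, not figures from the lecture.

```python
import numpy as np

# One-compartment intravitreal model with first-order elimination (hypothetical values).
dose = 0.5      # mg, intravitreal bolus
V = 4.0         # mL, vitreous volume
CL = 0.05      # mL/h, vitreal clearance
k_el = CL / V   # 1/h, first-order elimination rate constant

t = np.linspace(0.0, 400.0, 401)        # hours
C = (dose / V) * np.exp(-k_el * t)      # mg/mL, concentration-time profile

t_half = np.log(2) / k_el               # elimination half-life (h)
auc = dose / CL                         # mg*h/mL, area under the curve
```

With these placeholder numbers the half-life comes out to roughly 55 hours; compartmental analysis of measured concentrations runs this calculation in reverse (top-down), estimating V and CL from the observed decay.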
Screen Savers: Protecting adolescent mental health in a digital world
In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists face mounting pressure to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and of metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering the impact of digital change on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies, and the challenge of keeping pace with technological advancements. I therefore discuss the concept of 'fast science', where, during crises, scientists might lower their standards of evidence to reach conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
How are the epileptogenesis clocks ticking?
The epileptogenesis process is associated with large-scale changes in gene expression, which contribute to the remodelling of brain networks and permanently alter excitability. About 80% of protein-coding genes are under the influence of circadian rhythms: 24-hour endogenous rhythms that drive a large number of daily changes in physiology and behavior. In the brain, the master clock regulates many pathways that are important during epileptogenesis and in established epilepsy, such as neurotransmission, synaptic homeostasis, inflammation, and the blood-brain barrier, among others. In-depth mapping of the molecular basis of circadian timing in the brain is key for a complete understanding of the cellular and molecular events connecting genes to phenotypes.
Mathematical and computational modelling of ocular hemodynamics: from theory to applications
Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for the identification of cause-and-effect relationships among interacting factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications.
While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both, as mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
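As an illustration of how mechanism-driven and data-driven components can be combined, here is a minimal sketch: a pressure-driven flow relation (flow proportional to BP minus IOP) corrected by a linear model fitted to residuals. All names, the resistance value, and the form of the correction are illustrative assumptions, not the models discussed in the talk.

```python
R = 2.0  # illustrative vascular resistance (arbitrary units)

def mechanistic_flow(bp, iop):
    """Mechanistic core: blood flow driven by BP and impeded by IOP."""
    return max(bp - iop, 0.0) / R

def fit_correction(samples):
    """Data-driven part: least-squares linear fit of the residuals
    (observed minus mechanistic flow) against IOP.
    samples: iterable of (bp, iop, observed_flow) triples."""
    xs = [iop for _, iop, _ in samples]
    ys = [obs - mechanistic_flow(bp, iop) for bp, iop, obs in samples]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

def hybrid_flow(bp, iop, slope, intercept):
    """Mechanistic prediction plus the fitted data-driven correction."""
    return mechanistic_flow(bp, iop) + slope * iop + intercept
```

The mechanistic part keeps the cause-and-effect structure interpretable, while the fitted correction absorbs systematic mismatch with data, in the spirit of the integrative approach described above.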
Metabolic Remodelling in the Developing Forebrain in Health and Disease
Little is known about the critical metabolic changes that neural cells have to undergo during development and how temporary shifts in this program can influence brain circuitries and behavior. Motivated by the identification of autism-associated mutations in SLC7A5, a transporter for metabolically essential large neutral amino acids (LNAAs), we utilized metabolomic profiling to investigate the metabolic states of the cerebral cortex across various developmental stages. Our findings reveal significant metabolic restructuring occurring in the forebrain throughout development, with specific groups of metabolites exhibiting stage-specific changes. Through the manipulation of Slc7a5 expression in neural cells, we discovered an interconnected relationship between the metabolism of LNAAs and lipids within the cortex. Neuronal deletion of Slc7a5 influences the postnatal metabolic state, resulting in a shift in lipid metabolism and a cell-type-specific modification in neuronal activity patterns. This ultimately gives rise to enduring circuit dysfunction.
Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum
Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.
From cells to systems: multiscale studies of the epileptic brain
It is increasingly recognized that epilepsy affects human brain organization across multiple scales, ranging from cellular alterations in specific regions to macroscale network imbalances. My talk will overview an emerging paradigm that integrates cellular, neuroimaging, and network modelling approaches to faithfully characterize the extent of structural and functional alterations in the common epilepsies. I will also discuss how a multiscale framework can help derive clinically useful biomarkers of dysfunction, and how these methods may guide surgical planning and prognostics.
Bridging machine learning and mechanistic modelling
Central place foraging: how insects anchor spatial information
Many insect species maintain a nest around which their foraging behaviour is centered, and can use path integration to maintain an accurate estimate of their distance and direction (a vector) to their nest. Some species, such as bees and ants, can also store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. They can also use remembered views of the terrain around salient locations or along travelled routes to guide return. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data, and which can be mapped to specific insect brain circuits. Notably, this does not include any significant topological knowledge: the insect does not need to recover the information (implicit in their vector memory) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by these algorithms that may point towards the existence of a more complex or integrated mental map in insects.
Fidelity and Replication: Modelling the Impact of Protocol Deviations on Effect Size
Cognitive science and cognitive neuroscience researchers have agreed that the replication of findings is important for establishing which ideas (or theories) are integral to the study of cognition across the lifespan. Recently, high-profile papers have called into question findings that were once thought to be unassailable. Much attention has been paid to how p-hacking, publication bias, and sample size are responsible for failed replications. However, much less attention has been paid to the fidelity with which researchers enact study protocols. Researchers conducting education or clinical trials are aware of the importance of fidelity – the extent to which protocols are delivered in the same way across participants. Nevertheless, this idea has not been applied to cognitive contexts. This seminar discusses factors that impact the replicability of findings alongside recent models suggesting that even small fidelity deviations have real impacts on the data collected.
Programmed axon death: from animal models into human disease
Programmed axon death is a widespread and completely preventable mechanism in injury and disease. Mouse and Drosophila studies define a molecular pathway involving activation of the SARM1 NADase and its prevention by the NAD-synthesising enzyme NMNAT2. Loss of axonal NMNAT2 causes its substrate, NMN, to accumulate and activate SARM1, driving loss of NAD and changes in ATP, ROS and calcium. Animal models caused by genetic mutation, toxins, viruses or metabolic defects can be alleviated by blocking programmed axon death, for example models of CMT1B, chemotherapy-induced peripheral neuropathy (CIPN), rabies and diabetic peripheral neuropathy (DPN). The perinatal lethality of NMNAT2-null mice is completely rescued, restoring a normal, healthy lifespan. Animal models lack the genetic and environmental diversity present in human populations, and this is problematic for modelling gene-environment combinations, for example in CIPN and DPN, and for identifying rare, pathogenic mutations. Instead, by testing human gene variants in WGS datasets for loss- and gain-of-function, we identified an enrichment of rare SARM1 gain-of-function variants in sporadic ALS, despite previous negative findings in SOD1 transgenic mice. We have shown in mice that heterozygous SARM1 loss-of-function is protective against a range of axonal stresses and that naturally occurring SARM1 loss-of-function alleles are present in human populations. This enables new approaches to identify disorders where blocking SARM1 may be therapeutically useful, and the existence of two dominant-negative human variants in healthy adults is some of the best evidence available that drugs blocking SARM1 are likely to be safe. Further loss- and gain-of-function variants in SARM1 and NMNAT2 are being identified and used to extend and strengthen the evidence of association with neurological disorders. We aim to identify diseases, and specific patients, in whom SARM1-blocking drugs are most likely to be effective.
Modelling metaphor comprehension as a form of analogizing
What do people do when they comprehend language in discourse? According to many psychologists, they build and maintain cognitive representations of utterances in four complementary mental models for discourse that interact with each other: the surface text, the text base, the situation model, and the context model. When people encounter metaphors in these utterances, they need to incorporate them into each of these mental representations for the discourse. Since influential metaphor theories define metaphor as a form of (figurative) analogy, involving cross-domain mapping of a smaller or greater extent, the general expectation has been that metaphor comprehension is also based on analogizing. This expectation, however, has been only partly borne out by the data. There is no one-to-one relationship between metaphor as (conceptual) structure (analogy) and metaphor as (psychological) process (analogizing). According to Deliberate Metaphor Theory (DMT), only some metaphors are handled by analogy. Instead, most metaphors are presumably handled by lexical disambiguation. This is a hypothesis that brings together most metaphor research in a provocatively new way: it means that most metaphors are not processed metaphorically, which produces a paradox of metaphor. In this talk I will sketch out how this paradox arises and how it can be resolved by a new version of DMT, which I have described in my forthcoming book Slowing metaphor down: Updating Deliberate Metaphor Theory (currently under review). In this theory, the distinction between, but also the relation between, analogy in metaphorical structure versus analogy in metaphorical process is of central importance.
Neural circuits for vector processing in the insect brain
Several species of insects have been observed to perform accurate path integration, constantly updating a vector memory of their location relative to a starting position, which they can use to take a direct return path. Foraging insects such as bees and ants are also able to store and recall the vectors to return to food locations, and to take novel shortcuts between these locations. Other insects, such as dung beetles, are observed to integrate multimodal directional cues in a manner well described by vector addition. All these processes appear to be functions of the Central Complex, a highly conserved and strongly structured circuit in the insect brain. Modelling this circuit, at the single neuron level, suggests it has general capabilities for vector encoding, vector memory, vector addition and vector rotation that can support a wide range of directed and navigational behaviours.
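The core vector operations described above can be caricatured in a few lines. This is an idealised sketch, assuming perfect heading and distance estimates and a home vector that is simply the negated sum of displacement vectors; the actual Central Complex circuit models operate at the single-neuron level.

```python
import math

def path_integrate(steps):
    """Accumulate (heading_radians, distance) steps into a position
    estimate and return the home vector pointing back to the start."""
    x = sum(d * math.cos(h) for h, d in steps)
    y = sum(d * math.sin(h) for h, d in steps)
    return (-x, -y)

def add_vectors(v, w):
    """Vector addition, e.g. combining a stored food vector with the
    current home vector to compute a novel shortcut between locations."""
    return (v[0] + w[0], v[1] + w[1])
```

For example, after walking 3 units east and 4 units north, the home vector points 3 units west and 4 units south, giving the direct return path of length 5 that path-integrating insects are observed to take.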
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
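The activity-sparsity mechanism can be caricatured in a few lines. The thresholded output with subtractive reset used here is a common spiking-style convention and only an assumption about how an event-based unit might achieve sparsity; it is not the exact EGRU update rule.

```python
def event_gate(states, threshold=1.0):
    """Units communicate only via events: a unit whose internal state
    exceeds the threshold emits its value and is reset subtractively;
    all other units emit nothing (zero) and keep their state."""
    events = [s if s > threshold else 0.0 for s in states]
    new_states = [s - threshold if s > threshold else s for s in states]
    return events, new_states
```

Because most emitted values are exactly zero at any given step, downstream weight multiplications can skip the silent units entirely, which is where the efficiency gains for both training and inference come from.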
General purpose event-based architectures for deep learning
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit 4 rewarded locations on a spatial maze in sequence, which defined a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
A model of colour appearance based on efficient coding of natural images
An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one-octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, as well as primate retinal ganglion cell responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that, contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and is a basis for modelling the vision of humans and other animals.
Can I be bothered? Neural and computational mechanisms underlying the dynamics of effort processing (BACN Early-career Prize Lecture 2021)
From a workout at the gym to helping a colleague with their work, every day we make decisions about whether we are willing to exert effort to obtain some sort of benefit. Increases in how effortful actions and cognitive processes are perceived to be have been linked to clinically severe impairments of motivation, such as apathy and fatigue, across many neurological and psychiatric conditions. However, the vast majority of neuroscience research has focused on understanding the benefits for acting, the rewards, and not on the effort required. As a result, the computational and neural mechanisms underlying how effort is processed are poorly understood. How do we compute how effortful we perceive a task to be? How does this feed into our motivation and decisions of whether to act? How are such computations implemented in the brain? And how do they change in different environments? I will present a series of studies examining these questions using novel behavioural tasks, computational modelling, fMRI, pharmacological manipulations, and testing in a range of different populations. These studies highlight how the brain represents the costs of exerting effort, and the dynamic processes underlying how our sensitivity to effort changes as a function of our goals, traits, and socio-cognitive processes. This work provides new computational frameworks for understanding and examining impaired motivation across psychiatric and neurological conditions, as well as why all of us, sometimes, can’t be bothered.
Computational modelling of neurotransmitter release
Synaptic transmission provides the basis for neuronal communication. When an action-potential propagates through the axonal arbour, it activates voltage-gated Ca2+ channels located in the vicinity of release-ready synaptic vesicles docked at the presynaptic active zone. Ca2+ ions enter the presynaptic terminal and activate the vesicular Ca2+ sensor, thereby triggering neurotransmitter release. This whole process occurs on a timescale of a few milliseconds. In addition to fast, synchronous release, which keeps pace with action potentials, many synapses also exhibit delayed asynchronous release that persists for tens to hundreds of milliseconds. In this talk I will demonstrate how experimentally constrained computational modelling of underlying biological processes can complement laboratory studies (using electrophysiology and imaging techniques) and provide insights into the mechanisms of synaptic transmission.
The Problem of Testimony
The talk will detail work drawing on behavioural results, formal analysis, and computational modelling with agent-based simulations to unpack the scale of the challenge humans face when trying to work out and factor in the reliability of their sources. In particular, it is shown how and why this task admits of no easy solution in the context of wider communication networks, and how this will affect the accuracy of our beliefs. The implications of this for the shift in the size and topology of our communication networks through the uncontrolled rise of social media are discussed.
Do Capuchin Monkeys, Chimpanzees and Children form Overhypotheses from Minimal Input? A Hierarchical Bayesian Modelling Approach
Abstract concepts are a powerful tool to store information efficiently and to make wide-ranging predictions in new situations based on sparse data. Whereas looking-time studies point towards an early emergence of this ability in human infancy, other paradigms like the relational match-to-sample task often show a failure to detect abstract concepts like same and different until the late preschool years. Similarly, non-human animals have difficulties solving those tasks and often succeed only after long training regimes. Given the huge influence of small task modifications, there is an ongoing debate about the conclusiveness of these findings for the development and phylogenetic distribution of abstract reasoning abilities. Here, we applied the concept of “overhypotheses”, which is well known in the infant and cognitive modeling literature, to study the capabilities of 3- to 5-year-old children, chimpanzees, and capuchin monkeys in a unified and more ecologically valid task design. In a series of studies, participants themselves sampled reward items from multiple containers or witnessed the sampling process. Only when they detected the abstract pattern governing the reward distributions within and across containers could they optimally guide their behavior and maximize the reward outcome in a novel test situation. We compared each species’ performance to the predictions of a probabilistic hierarchical Bayesian model capable of forming overhypotheses at a first and second level of abstraction and adapted to their species-specific reward preferences.
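A toy version of the kind of inference an overhypothesis supports: from a handful of observed containers, estimate how likely a new container is to be internally uniform, so a single draw from it predicts the rest. The Laplace smoothing here is an illustrative assumption; the model used in the talk is a full hierarchical Bayesian model.

```python
def p_homogeneous(containers):
    """Overhypothesis sketch: probability that a new container is
    internally uniform, estimated from previously seen containers
    (each container is a list of item types)."""
    same = sum(1 for c in containers if len(set(c)) == 1)
    # Laplace-smoothed proportion of homogeneous containers
    return (same + 1) / (len(containers) + 2)
```

After seeing three uniform containers this toy estimate is 0.8, so a learner holding the overhypothesis would bet that one sampled reward predicts the contents of the whole new container, even though it has never sampled from that container before.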
One by one: brain organoid modelling of neurodevelopmental disorders at single cell resolution
NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks
Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.
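The training objective can be sketched as a task loss plus an activity penalty. The L1 penalty on activations used here is only an assumption about how "energy consumption" might be operationalised, and the weighting is arbitrary; the talk's networks are trained with their own specific formulation.

```python
def energy_regularised_loss(errors, activations, weight=0.1):
    """Squared prediction error plus an 'energy' cost on neural
    activity; minimising the second term pushes the network to
    silence units whose input is predictable."""
    task = sum(e ** 2 for e in errors)
    energy = sum(abs(a) for a in activations)
    return task + weight * energy
```

Under an objective of this shape, a unit that carries predictable information is pure cost, so the cheapest solution is to predict and cancel it, which is the proposed route by which prediction and error units emerge without architectural hard-wiring.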
NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks
The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically in an unsupervised manner, i.e. without any direct instruction or explicit reward. For example, we effortlessly estimate the average tallness of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making processes. Despite the ubiquity of passively learning from the structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised in inferring regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilized to impact working memory and decision making behaviours.
NMC4 Short Talk: Brain-inspired spiking neural network controller for a neurorobotic whisker system
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model to study active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modelling trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot, exploiting the Neurorobotics Platform, a simulation platform offering a virtual environment to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.
NMC4 Short Talk: Sensory intermixing of mental imagery and perception
Several lines of research have demonstrated that internally generated sensory experience - such as during memory, dreaming and mental imagery - activates similar neural representations as externally triggered perception. This overlap raises a fundamental challenge: how is the brain able to keep apart signals reflecting imagination and reality? In a series of online psychophysics experiments combined with computational modelling, we investigated to what extent imagination and perception are confused when the same content is simultaneously imagined and perceived. We found that simultaneous congruent mental imagery consistently led to an increase in perceptual presence responses, and that congruent perceptual presence responses were in turn associated with a more vivid imagery experience. Our findings can be best explained by a simple signal detection model in which imagined and perceived signals are added together. Perceptual reality monitoring can then easily be implemented by evaluating whether this intermixed signal is strong or vivid enough to pass a ‘reality threshold’. Our model suggests that, in contrast to self-generated sensory changes during movement, our brain does not discount self-generated sensory signals during mental imagery. This has profound implications for our understanding of reality monitoring and perception in general.
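The additive signal detection model can be written down directly. The Gaussian noise model and the parameter values here are illustrative assumptions used to show the mechanism, not the fitted values from the experiments.

```python
import random

def reports_presence(stimulus, imagery, noise_sd=1.0,
                     reality_threshold=1.5, rng=random):
    """Imagined and perceived signals are summed with sensory noise;
    'present' is reported when the mixture passes the reality
    threshold, implementing perceptual reality monitoring."""
    mixture = stimulus + imagery + rng.gauss(0.0, noise_sd)
    return mixture > reality_threshold
```

Because imagery adds to the mixture rather than being discounted, congruent imagery pushes weak stimuli over the reality threshold, reproducing the increase in perceptual presence responses described above.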
NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing
A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from the ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from training data.
This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
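The substitution test at the heart of this approach can be sketched as follows; the layer functions and labels are placeholders, and in practice the brain activity must first be mapped into the layer's activation space.

```python
def substitution_test(brain_activity, later_layers, true_label):
    """Task-relevant correspondence sketch: substitute recorded brain
    activity for a DCNN layer's output, run the remaining layers, and
    check whether the recognition decision is still correct."""
    x = brain_activity
    for layer in later_layers:
        x = layer(x)
    # decision = index of the largest output (argmax)
    return max(range(len(x)), key=x.__getitem__) == true_label
```

If substituting a region's activity at a given layer preserves correct decisions, that layer and region correspond in the task-relevant sense, which is a stricter criterion than shared variance alone.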
Generative models of brain function: Inference, networks, and mechanisms
This talk will focus on the generative modelling of resting-state time series, or endogenous neuronal activity. I will survey developments in modelling distributed neuronal fluctuations – spectral dynamic causal modelling (DCM) for functional MRI – and how this modelling rests upon functional connectivity. The dynamics of brain connectivity has recently attracted a lot of attention among brain mappers. I will also show a novel method to identify dynamic effective connectivity using spectral DCM. Further, I will summarise the development of the next generation of DCMs, from large-scale, whole-brain schemes that are computationally inexpensive to, at the other extreme, more sophisticated and biophysically detailed generative models based on canonical microcircuits.
The wonders and complexities of brain microstructure: Enabling biomedical engineering studies combining imaging and models
Brain microstructure plays a key role in driving the transport of drug molecules administered directly to brain tissue, as in Convection-Enhanced Delivery procedures. This study reports the first systematic attempt to characterise the cytoarchitecture of commissural, long-association and projection fibers, namely the corpus callosum, the fornix and the corona radiata. Ovine samples from three different subjects were imaged using scanning electron microscopy combined with focused ion beam milling, with particular focus on the axons. For each tract, a 3D reconstruction of relatively large volumes (including a significant number of axons) was performed, and the outer axonal ellipticity, outer axonal cross-sectional area and its perimeter were measured. This study [1] provides useful insight into the fibrous organisation of the tissue, which can be described as a composite material presenting elliptical, tortuous, tubular fibers, leading to a workflow that enables accurate simulations of drug delivery including well-resolved microstructural features. As a demonstration of these imaging and reconstruction techniques, our research analyses the hydraulic permeability of two white matter (WM) areas (corpus callosum and fornix) whose three-dimensional microstructure was reconstructed from the electron microscopy images. Considering that white matter is mainly composed of elongated, parallel axons, we computed the permeability along the parallel and perpendicular directions using computational fluid dynamics [2]. The results show a statistically significant difference between parallel and perpendicular permeability, with a ratio of about 2 in both white matter structures analysed, demonstrating their anisotropic behaviour. This is in line with experimental results obtained using perfusion of brain matter [3]. 
Moreover, we find a significant difference between the permeability of the corpus callosum and that of the fornix, which suggests that white matter heterogeneity should also be considered when modelling drug transport in the brain. Our findings, which demonstrate and quantify the anisotropic and heterogeneous character of the white matter, represent a fundamental contribution not only to drug delivery modelling but also to shedding light on interstitial transport mechanisms in the extracellular space. These and many other discoveries will be discussed during the talk. References: [1] https://www.researchsquare.com/article/rs-686577/v1; [2] https://www.pnas.org/content/118/36/e2105328118; [3] https://ieeexplore.ieee.org/abstract/document/9198110
Neural mechanisms of altered states of consciousness under psychedelics
Interest in psychedelic compounds is growing due to their remarkable potential for understanding altered neural states and their breakthrough status in treating various psychiatric disorders. However, there are major knowledge gaps regarding how psychedelics affect the brain. The Computational Neuroscience Laboratory at the Turner Institute for Brain and Mental Health, Monash University, uses multimodal neuroimaging to test hypotheses of the brain’s functional reorganisation under psychedelics, informed by accounts of hierarchical predictive processing, using dynamic causal modelling (DCM). DCM is a generative modelling technique that allows one to infer the directed connectivity among brain regions from functional brain imaging measurements. In this webinar, Associate Professor Adeel Razi and PhD candidate Devon Stoliker will showcase a series of previous and new findings on how changes to synaptic mechanisms across the brain hierarchy, under the control of serotonin receptors, influence sensory and associative brain connectivity. Understanding these neural mechanisms of the subjective and therapeutic effects of psychedelics is critical for the rational development of novel treatments and for the design and success of future clinical trials. Associate Professor Adeel Razi is an NHMRC Investigator Fellow and CIFAR Azrieli Global Scholar at the Turner Institute of Brain and Mental Health, Monash University. He performs cross-disciplinary research combining engineering, physics, and machine learning. Devon Stoliker is a PhD candidate at the Turner Institute for Brain and Mental Health, Monash University. His interest in consciousness and psychiatry has led him to investigate the neural mechanisms of classic psychedelic effects in the brain.
Transdiagnostic approaches to understanding neurodevelopment
Macroscopic brain organisation emerges early in life, even prenatally, and continues to develop through adolescence and into early adulthood. The emergence and continual refinement of large-scale brain networks, connecting neuronal populations across anatomical distance, allows for increasing functional integration and specialisation. This process is thought to be crucial for the emergence of complex cognitive processes. But how and why is this process so diverse? We used structural neuroimaging collected from a large, diverse cohort to explore how different features of macroscopic brain organisation are associated with diverse cognitive trajectories. We used diffusion-weighted imaging (DWI) to construct whole-brain white-matter connectomes. A simulated attack on each child's connectome revealed that some brain networks were strongly organised around highly connected 'hubs'. The more children's brains were critically dependent on hubs, the better their cognitive skills. Conversely, having poorly integrated hubs was a very strong risk factor for cognitive and learning difficulties across the sample. We subsequently developed a computational framework, using generative network modelling (GNM), to model the emergence of this kind of connectome organisation. Relatively subtle changes in the wiring rules of this framework give rise to differential developmental trajectories, because of small biases in the preferential wiring properties of different nodes within the network. Finally, we were able to use this GNM to implicate the molecular and cellular processes that govern these different growth patterns.
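The "simulated attack" analysis can be sketched in miniature: compute a network integration measure, delete the hub, and compare. A toy illustration follows; the seven-node graph and the use of global efficiency as the integration measure are assumptions for the example, not the study's actual connectomes or metric.

```python
from collections import deque
from itertools import combinations

def efficiency(nodes, edges):
    """Global efficiency: mean inverse shortest-path length over node pairs."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    total = 0.0
    for src, dst in combinations(nodes, 2):
        # Breadth-first search for the shortest path length from src to dst.
        seen, frontier, d = {src}, deque([(src, 0)]), None
        while frontier:
            node, dist = frontier.popleft()
            if node == dst:
                d = dist
                break
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, dist + 1))
        if d is not None:              # unreachable pairs contribute 0
            total += 1.0 / d
    n = len(nodes)
    return 2 * total / (n * (n - 1))

# Toy hub-organised network: node 0 is a hub linking three two-node modules.
nodes = list(range(7))
edges = [(0, i) for i in range(1, 7)] + [(1, 2), (3, 4), (5, 6)]
eff_full = efficiency(nodes, edges)
print(eff_full)                        # 5/7 ≈ 0.714

# "Attack" the hub: integration collapses because it depended on node 0.
attacked_nodes = [n for n in nodes if n != 0]
attacked_edges = [(a, b) for a, b in edges if 0 not in (a, b)]
eff_attacked = efficiency(attacked_nodes, attacked_edges)
print(eff_attacked)                    # 0.2
```

The drop from 0.71 to 0.2 is the kind of hub-dependence signature the abstract relates to cognitive skill.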
Qualitative Structure, Automorphism Groups and Private Language
It is generally agreed that the qualities of conscious experience instantiate structural properties, usually called relations. These furnish a representation of qualities (or qualia, in fact) in terms of a mathematical space Q (rather than a set), which is crucial to both the modelling and the measurement of conscious experience. What is usually disregarded is that “only such structural properties generalize across individuals” (Austen Clark), whereas the qualities themselves, as differentiated by stimulus specifications, behaviour or reports, do not. We show that this implies that only the part of Q which is invariant with respect to the automorphism group has a well-defined referent, while individual elements do not. This poses a prima facie limitation on any theory or experiment that aims to address individual qualities. We show how mathematical theories of consciousness can overcome this limitation via symmetry groups and group actions, making accessible to science what is properly called private language.
In vitro bioelectronic models of the gut-brain axis
The human gut microbiome has emerged as a key player in the bidirectional communication of the gut-brain axis, affecting various aspects of homeostasis and pathophysiology. Until recently, the majority of studies that seek to explore the mechanisms underlying the microbiome-gut-brain axis cross-talk relied almost exclusively on animal models, particularly gnotobiotic mice. Despite the great progress made with these models, various limitations, including ethical considerations and interspecies differences that limit the translatability of data to human systems, have pushed researchers to seek alternatives. Over the past decades, the field of in vitro tissue modelling has experienced tremendous growth, thanks to advances in 3D cell biology, materials science and bioengineering, pushing the borders of our ability to more faithfully emulate the in vivo situation. Organ-on-chip technology and bioengineered tissues have emerged as highly promising alternatives to animal models for a wide range of applications. In this talk I’ll discuss our progress towards generating a complete platform of the human microbiota-gut-brain axis with integrated monitoring and sensing capabilities. Bringing together principles of materials science, tissue engineering, 3D cell biology and bioelectronics, we are building advanced models of the GI tract and the BBB/NVU, with real-time, label-free monitoring units embedded in the model architecture, towards a robust and more physiologically relevant human in vitro model, aiming to i) elucidate the role of microbiota in gut-brain axis communication, ii) study how diet and impaired microbiota profiles affect various (patho-)physiologies, and iii) test personalised medicine approaches for disease modelling and drug testing.
Plasticity and learning in multisensory perception for action
Understanding the role of prediction in sensory encoding
At any given moment the brain receives more sensory information than it can use to guide adaptive behaviour, creating the need for mechanisms that promote efficient processing of incoming sensory signals. One way in which the brain might reduce its sensory processing load is to encode successive presentations of the same stimulus in a more efficient form, a process known as neural adaptation. Conversely, when a stimulus violates an expected pattern, it should evoke an enhanced neural response. Such a scheme for sensory encoding has been formalised in predictive coding theories, which propose that recent experience establishes expectations in the brain that generate prediction errors when violated. In this webinar, Professor Jason Mattingley will discuss whether the encoding of elementary visual features is modulated when otherwise identical stimuli are expected or unexpected based upon the history of stimulus presentation. In humans, EEG was employed to measure neural activity evoked by gratings of different orientations, and multivariate forward modelling was used to determine how orientation selectivity is affected for expected versus unexpected stimuli. In mice, two-photon calcium imaging was used to quantify orientation tuning of individual neurons in the primary visual cortex to expected and unexpected gratings. Results revealed enhanced orientation tuning to unexpected visual stimuli, both at the level of whole-brain responses and for individual visual cortex neurons. Professor Mattingley will discuss the implications of these findings for predictive coding theories of sensory encoding. Professor Jason Mattingley is a Laureate Fellow and Foundation Chair in Cognitive Neuroscience at The University of Queensland. His research is directed toward understanding the brain processes that support perception, selective attention and decision-making, in health and disease.
Bacterial rheotaxis in bulk and at surfaces
Individual bacteria transported in viscous flows show complex interactions with the flow and with bounding surfaces, resulting from their complex shape as well as their activity. Understanding these transport dynamics is crucial, as they impact soil contamination and transport in biological conduits or catheters, and thus constitute a serious health threat. Here we investigate the trajectories of individual E. coli bacteria in confined geometries under flow, using microfluidic model systems in bulk flows as well as close to surfaces, using a novel Lagrangian 3D tracking method. Combining experimental observations and modelling, we elucidate the origin of upstream swimming, lateral drift and persistent transport along corners. References: [1] Junot et al., EPL 126, 44003 (2019); [2] Mathijssen et al., Nature Comm. 10:3 (2019); [3] Figueroa-Morales et al., Soft Matter 11, 6284-6293 (2015); [4] Darnige et al., Review of Scientific Instruments 88, 055106 (2017); [5] Jing et al., Science Advances 6, eabb2012 (2020); [6] Figueroa-Morales et al., Sci. Adv. 6, eaay0155 (2020), 10.1126/sciadv.aay0155
Learning under uncertainty in autism and anxiety
Optimally interacting with a changeable and uncertain world requires estimating and representing uncertainty. Psychiatric and neurodevelopmental conditions such as anxiety and autism are characterized by an altered response to uncertainty. I will review the evidence for these phenomena from computational modelling, and outline the planned experiments from our lab to add further weight to these ideas. If time allows, I will present results from a control sample in a novel task interrogating a particular type of uncertainty and their associated transdiagnostic psychiatric traits.
Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses
There is little consensus on the level of spatial complexity at which dendrites operate. On the one hand, emergent evidence indicates that synapses cluster at micrometer spatial scales. On the other hand, most modelling and network studies ignore dendrites altogether. This dichotomy raises an urgent question: what is the smallest relevant spatial scale for understanding dendritic computation? We have developed a method to construct compartmental models at any level of spatial complexity. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models. Thus, we are able to systematically construct passive as well as active dendrite models at varying degrees of spatial complexity. We evaluate which elements of the dendritic computational repertoire are captured by these models. We show that many canonical elements of the dendritic computational repertoire can be reproduced with few compartments. For instance, for a model to behave as a two-layer network, it is sufficient to fit a reduced model at the soma and at locations at the dendritic tips. In the basal dendrites of an L2/3 pyramidal model, we reproduce the backpropagation of somatic action potentials (APs) with a single dendritic compartment at the tip. Further, we obtain the well-known Ca-spike coincidence detection mechanism in L5 Pyramidal cells with as few as eleven compartments, the requirement being that their spacing along the apical trunk supports AP backpropagation. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. 
Consequently, when the average conductance load on distal synapses is constant, the dendritic tree can be simplified while appropriately decreasing synaptic weights. When the conductance level fluctuates strongly, for instance through a priori unpredictable fluctuations in NMDA activation, a constant weight rescale factor cannot be found, and the dendrite cannot be simplified. We have created an open-source Python toolbox (NEAT - https://neatdend.readthedocs.io/en/latest/) that automates the simplification process. A NEST implementation of the reduced models, currently under construction, will enable the simulation of few-compartment models in large-scale networks, thus bridging the gap between cellular- and network-level neuroscience.
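The least-squares fitting idea behind such reductions can be illustrated at its simplest (this is not the NEAT algorithm; a single passive compartment with invented parameter values): every integration step of the membrane equation C dv/dt = I - g v yields one linear equation in the unknowns (C, g), so a noiseless trace determines them exactly.

```python
def simulate(g, C, current, dt=0.1):
    """Forward-Euler integration of a passive membrane: C dv/dt = I - g v."""
    v = [0.0]
    for I in current[:-1]:
        v.append(v[-1] + dt * (I - g * v[-1]) / C)
    return v

def fit_passive(v, current, dt=0.1):
    """Least-squares recovery of (g, C) from a voltage trace.

    Each Euler step gives one linear equation,
        C * (v[t+1] - v[t]) + g * v[t] * dt = I[t] * dt,
    so (C, g) solve a 2x2 normal-equation system, assembled by hand below."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for t in range(len(v) - 1):
        dv, x = v[t + 1] - v[t], v[t] * dt
        y = current[t] * dt
        a11 += dv * dv; a12 += dv * x; a22 += x * x
        b1 += dv * y;  b2 += x * y
    det = a11 * a22 - a12 * a12
    C = (b1 * a22 - b2 * a12) / det
    g = (b2 * a11 - b1 * a12) / det
    return g, C

current = [1.0] * 200 + [0.0] * 200          # step current: charge, then decay
v = simulate(g=0.3, C=2.0, current=current)
g_fit, C_fit = fit_passive(v, current)
print(g_fit, C_fit)                          # recovers ≈ (0.3, 2.0)
```

In the real method the same principle is applied to impedances at chosen dendritic locations, with many more compartments and parameters.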
Generative models of the human connectome
The human brain is a complex network of neuronal connections. The precise arrangement of these connections, otherwise known as the topology of the network, is crucial to its functioning. Recent efforts to understand how the complex topology of the brain has emerged have used generative mathematical models, which grow synthetic networks according to specific wiring rules. Evidence suggests that a wiring rule which emulates a trade-off between connection costs and functional benefits can produce networks that capture essential topological properties of brain networks. In this webinar, Professor Alex Fornito and Dr Stuart Oldham will discuss these previous findings, as well as their own efforts in creating more physiologically constrained generative models. Professor Alex Fornito is Head of the Brain Mapping and Modelling Research Program at the Turner Institute for Brain and Mental Health. His research focuses on developing new imaging techniques for mapping human brain connectivity and applying these methods to shed light on brain function in health and disease. Dr Stuart Oldham is a Research Fellow at the Turner Institute for Brain and Mental Health and a Research Officer at the Murdoch Children’s Research Institute. He is interested in characterising the organisation of human brain networks, with particular focus on how this organisation develops, using neuroimaging and computational tools.
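The cost-value trade-off wiring rule discussed above can be sketched generically: at each step, a candidate edge is chosen with probability that penalises wiring cost (distance) and rewards a topological value term. The specific form below, with a degree product standing in for "functional benefit" and parameters eta and gamma, is an illustrative assumption, not the speakers' fitted model.

```python
import math
import random

def grow_network(coords, n_edges, eta=2.0, gamma=1.0, seed=0):
    """Grow a synthetic connectome one edge at a time.

    Wiring rule (one common GNM form, used here only as an illustration):
        P(i, j) ∝ d(i, j)**(-eta) * (k_i * k_j + 1)**gamma,
    trading off wiring cost (Euclidean distance d) against topological
    value (degree product k_i * k_j)."""
    rng = random.Random(seed)
    n = len(coords)
    degree = [0] * n
    edges = set()
    dist = lambda i, j: math.dist(coords[i], coords[j])
    while len(edges) < n_edges:
        cand = [(i, j) for i in range(n) for j in range(i + 1, n)
                if (i, j) not in edges]
        weights = [dist(i, j) ** -eta * (degree[i] * degree[j] + 1) ** gamma
                   for i, j in cand]
        i, j = rng.choices(cand, weights=weights)[0]
        edges.add((i, j))
        degree[i] += 1; degree[j] += 1
    return edges, degree

rng = random.Random(1)
coords = [(rng.random(), rng.random()) for _ in range(30)]
edges, degree = grow_network(coords, n_edges=60)
# With the distance penalty switched off (eta=0), edges get longer on average.
long_edges, _ = grow_network(coords, n_edges=60, eta=0.0)
mean = lambda es: sum(math.dist(coords[a], coords[b]) for a, b in es) / len(es)
print(mean(edges), mean(long_edges))
```

Tuning eta and gamma against empirical connectomes is what lets such models reproduce (or fail to reproduce) hub structure, which motivates the more physiologically constrained variants discussed in the webinar.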
AI-guided solutions for early detection of neurodegenerative disorders
Despite the importance of early diagnosis of dementia for prognosis and personalised interventions, we still lack robust tools for predicting individual progression to dementia. We propose a trajectory modelling approach that mines multimodal data from patients at early dementia stages to derive individualised prognostic scores of cognitive decline. Our approach has the potential to facilitate effective stratification of individuals based on prognostic disease trajectories, reducing patient misclassification, with important implications for clinical practice.
Mathematical models of neurodegenerative diseases
Neurodegenerative diseases such as Alzheimer’s or Parkinson’s are devastating conditions with poorly understood mechanisms and no cure. Yet, a striking feature of these conditions is the characteristic pattern of invasion throughout the brain, leading to well-codified disease stages associated with various cognitive deficits and pathologies. How can we use mathematical modelling to gain insight into this process and, doing so, gain understanding about how the brain works? In this talk, I will show that by linking new mathematical theories to recent progress in imaging, we can unravel some of the universal features associated with dementia and, more generally, brain functions.
Fragility of the human connectome across the lifespan
The human brain network architecture can reveal crucial aspects of brain function and dysfunction. The topology of this network (known as the connectome) is shaped by a trade-off between wiring cost and network efficiency, and it has highly connected hub regions playing a prominent role in many brain disorders. By studying a landscape of plausible brain networks that preserve the wiring cost, fragile and resilient hubs can be identified. In this webinar, Dr Leonardo Gollo and Dr James Pang from Monash University will discuss this approach across the lifespan and some of its implications for neurodevelopmental and neurodegenerative diseases. Dr Leonardo Gollo is a Senior Research Fellow at the Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University. He holds an ARC Future Fellowship and his research interests include brain modelling, systems neuroscience, and connectomics. Dr James Pang is a Research Fellow at the Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University. His research interests are on combining neuroimaging and biophysical modelling to better understand the mechanisms of brain function in health and disease.
Computational psychophysics at the intersection of theory, data and models
Behavioural measurements are often overlooked by computational neuroscientists, who prefer to focus on electrophysiological recordings or neuroimaging data. This attitude is largely due to the perceived lack of depth and richness of behavioural datasets. I will show how contemporary psychophysics can deliver extremely rich and highly constraining datasets that naturally interface with computational modelling. More specifically, I will demonstrate how psychophysics can be used to guide, constrain and refine computational models, and how models can be exploited to design, motivate and interpret psychophysical experiments. Examples will span a wide range of topics (from feature detection to natural scene understanding) and methodologies (from cascade models to deep learning architectures).
From genetics to neurobiology through transcriptomic data analysis
Over the past years, genetic studies have uncovered hundreds of genetic variants associated with complex brain disorders. While this represents a major step forward in understanding the genetic etiology of brain disorders, the functional interpretation of these variants remains challenging. We aim to help with the functional characterization of variants through transcriptomic data analysis. For instance, we rely on brain transcriptome atlases, such as the Allen Brain Atlases, to infer functional relations between genes; one example is the identification of signaling mechanisms of steroid receptors. Further, by integrating brain transcriptome atlases with neuropathology and neuroimaging data, we identify key genes and pathways associated with brain disorders (e.g. Parkinson's disease). With technological advances, we can now profile gene expression in single cells at large scale. These developments present significant computational challenges. Our lab focuses on developing scalable methods to identify cells in single-cell data through interactive visualization, scalable clustering, classification, and interpretable trajectory modelling. We also work on methods to integrate single-cell data across studies and technologies.
Error correction and reliability timescale in converging cortical networks
Rapidly changing inputs such as visual scenes and auditory landscapes are transmitted over several synaptic interfaces and perceived with little loss of detail, yet individual neurons are typically “noisy” and cortico-cortical connections are typically “weak”. To understand how information embodied in spike trains is transmitted in a lossless manner, we focus on a single synaptic interface: between pyramidal cells and putative interneurons. Using arbitrary white noise patterns injected intra-cortically as photocurrents in freely moving mice, we find that directly-activated cells exhibit precision of several milliseconds, whereas post-synaptic, indirectly-activated cells exhibit higher precision. Considering multiple identical messages, the reliability of directly-activated cells peaks at a timescale of dozens of milliseconds, whereas indirectly-activated cells exhibit an order-of-magnitude faster timescale. Using data-driven modelling, we find that this error correction is consistent with non-linear amplification of coincident spikes.
How dendrites help solve biological and machine learning problems
Dendrites are thin processes that extend from the cell body of neurons, the main computing units of the brain. The role of dendrites in complex brain functions has been investigated for several decades, yet their direct involvement in key behaviours such as sensory perception has only recently been established. In my presentation I will discuss how computational modelling has helped us illuminate dendritic function. I will present the main findings of a number of projects in the lab dealing with dendritic nonlinearities in excitatory and inhibitory neurons and their consequences for neuronal tuning and memory formation, the role of dendrites in solving nonlinear problems in human neurons, and recent efforts to advance machine learning algorithms by adopting dendritic features.
A machine learning way to analyse white matter tractography streamlines / Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI
1. Embedding is all you need: A machine learning way to analyse white matter tractography streamlines - Dr Shenjun Zhong, Monash Biomedical Imaging. Embedding white matter streamlines of various lengths into fixed-length latent vectors enables users to analyse them with general data mining techniques. However, finding a good embedding schema is still a challenging task, as existing methods based on spatial coordinates rely on manually engineered features and/or labelled datasets. In this webinar, Dr Shenjun Zhong will discuss his novel deep learning model that learns a latent space and solves the problem of streamline clustering without needing labelled data. Dr Zhong is a Research Fellow and Informatics Officer at Monash Biomedical Imaging. His research interests are sequence modelling, reinforcement learning and federated learning in the general medical imaging domain. 2. Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI - Dr Kamlesh Pawar, Monash Biomedical Imaging. Magnetic Resonance Imaging (MRI) is a widely used imaging modality in clinics and research. Although MRI is useful, it comes with the overhead of longer scan times compared to other medical imaging modalities. The longer scan times also make patients uncomfortable, and even subtle movements during the scan may result in severe motion artifacts in the images. In this seminar, Dr Kamlesh Pawar will discuss how artificial intelligence techniques can reduce scan time and correct motion artifacts. Dr Pawar is a Research Fellow at Monash Biomedical Imaging. His research interests include deep learning, MR physics, MR image reconstruction and computer vision.
The time of chromatin: emerging insights from longitudinal modelling of neurodevelopmental disorders
CURE-ND Neurotechnology Workshop - Innovative models of neurodegenerative diseases
One of the major roadblocks to medical progress in the field of neurodegeneration is the absence of animal models that fully recapitulate features of the human diseases. Unprecedented opportunities to tackle this challenge are emerging, e.g. from genome engineering and stem cell technologies, and there are intense efforts to develop models with high translational value. Simultaneously, single-cell, multi-omics and optogenetics technologies now allow longitudinal, molecular and functional analysis of human disease processes in these models at high resolution. During this workshop, 12 experts will present recent progress in the field and discuss: - What are the most advanced disease models available to date? - Which aspects of the human disease do these models accurately recapitulate, and which do they fail to replicate? - How should models be validated? Against which reference, which standards? - What are currently the best methods to analyse these models? - What is the field still missing in terms of modelling, and of technologies to analyse disease models? CURE-ND stands for 'Catalysing a United Response in Europe to Neurodegenerative Diseases'. It is a new alliance between the German Center for Neurodegenerative Diseases (DZNE), the Paris Brain Institute (ICM), Mission Lucidity (ML, a partnership between imec, KU Leuven, UZ Leuven and VIB in Belgium) and the UK Dementia Research Institute (UK DRI). Together, these partners embrace a joint effort to accelerate the pace of scientific discovery and nurture breakthroughs in the field of neurodegenerative diseases. This Neurotechnology Workshop is the first in a series of joint events aiming at exchanging expertise, promoting scientific collaboration and building a strong community of neurodegeneration researchers in Europe and beyond.
Mixed active-passive suspensions: from particle entrainment to spontaneous demixing
Understanding the properties of active matter is a challenge that is currently driving rapid growth in soft-matter and biological physics. Some of the most important examples of active matter are at the microscale, and include active colloids and suspensions of microorganisms, both as a simple active fluid (single species) and as mixed suspensions of active and passive elements. In this last class of systems, recent experimental and theoretical work has started to provide a window into new phenomena including activity-induced depletion interactions, phase separation, and the possibility of extracting net work from active suspensions. Here I will present our work on a paradigmatic example of a mixed active-passive system, where the activity is provided by swimming microalgae. Macroscopic and microscopic experiments reveal that microorganism-colloid interactions are dominated by rare close encounters leading to large displacements through direct entrainment. Simulations and theoretical modelling show that the ensuing particle dynamics can be understood in terms of a simple jump-diffusion process, combining standard diffusion with Poisson-distributed jumps. The entrainment length can be understood within the framework of Taylor dispersion as a competition between advection by the no-slip surface of the cell body and microparticle diffusion. Building on these results, we then ask how external control of the dynamics of the active component (e.g. induced microswimmer anisotropy/inhomogeneity) can be used to alter the transport of passive cargo. As a first step in this direction, we study the behaviour of mixed active-passive systems in confinement. The resulting spatial inhomogeneity in the swimmers’ distribution and orientation has a dramatic effect on the spatial distribution of the passive particles, with the colloids accumulating either towards the boundaries or towards the bulk of the sample depending on the size of the container. 
We show that this can be used to induce the system to de-mix spontaneously.
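The jump-diffusion picture invoked above is easy to simulate: Brownian steps plus rare, large, Poisson-distributed jumps. The sketch below is illustrative only (the parameter values D, jump rate and jump size are invented); it shows the key signature that the effective diffusivity, D_eff = D + rate·⟨jump²⟩/2, greatly exceeds the thermal value.

```python
import random

def jump_diffusion(steps, D=0.01, jump_rate=0.02, jump_len=5.0, dt=1.0, seed=0):
    """1-D jump-diffusion: Brownian motion plus Poisson-distributed jumps,
    the minimal process combining thermal diffusion with rare entrainment."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(steps):
        x += rng.gauss(0.0, (2 * D * dt) ** 0.5)   # thermal diffusion step
        if rng.random() < jump_rate * dt:          # rare entrainment event
            x += rng.gauss(0.0, jump_len)          # large displacement
        path.append(x)
    return path

# Estimate the effective diffusivity from the mean squared displacement.
paths = [jump_diffusion(1000, seed=s) for s in range(200)]
msd = sum(p[-1] ** 2 for p in paths) / len(paths)
d_eff = msd / (2 * 1000)      # MSD = 2 * D_eff * T, with T = steps * dt
print(d_eff)                  # ≈ D + rate * <jump^2> / 2 = 0.26, vs D = 0.01
```

Even though jumps occur in only ~2% of steps, they dominate the long-time transport, which is why rare close encounters control colloid dispersal in these suspensions.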
Modelling the neural mechanisms of navigation in insects
Modelling affective biases in rodents: behavioural and computational approaches
My research focuses, broadly speaking, on how emotions impact decision making. Specifically, I am interested in affective biases, a phenomenon known to be important in depression. Using a rodent decision-making task combined with computational modelling, I have investigated how different antidepressant and pro-depressant manipulations that are known to alter mood in humans alter judgement bias, and provided insight into the decision processes that underlie these behaviours. I will also highlight how the combination of behaviour and modelling can provide a truly translational approach, enabling comparison and interpretation of the same cognitive processes between animal and human research.
Non-equilibrium molecular assembly in reshaping and cutting cells
A key challenge in modern soft matter is to identify the principles that govern organisation and functionality in non-equilibrium systems. Current research efforts largely focus on non-equilibrium processes that occur either at the single-molecule scale (e.g. protein and DNA conformations under driving forces), or at the scale of whole tissues, organisms, and active colloidal and microscopic objects. However, the range of scales in between — from molecules to large-scale molecular assemblies that consume energy and perform work — remains under-explored. This is, nevertheless, the scale that is crucial for the function of a living cell, where molecular self-assembly driven far from equilibrium produces the mechanical work needed for cell reshaping, transport, motility, division, and healing. Today I will discuss physical modelling of active elastic filaments, called ESCRT-III filaments, that dynamically assemble and disassemble on cell membranes. This dynamic assembly changes the filaments’ shape and mechanical properties and leads to the remodelling and cutting of cells. I will present a range of experimental comparisons with our simulation results: from ESCRT-III-driven trafficking in eukaryotes to the division of evolutionarily simple archaeal cells.
Theory-driven probabilistic modeling of language use: a case study on quantifiers, logic and typicality
Theoretical linguistics postulates abstract structures that successfully explain key aspects of language. However, the precise relation between abstract theoretical ideas and empirical data from language use is not always apparent. Here, we propose to empirically test abstract semantic theories through the lens of probabilistic pragmatic modelling. We consider the historically important case of quantity words (e.g., ‘some’, ‘all’). Data from a large-scale production study seem to suggest that quantity words are understood via prototypes. But based on statistical and empirical model comparison, we show that a probabilistic pragmatic model that embeds a strict truth-conditional notion of meaning explains the data just as well as a model that encodes prototypes into the meaning of quantity words.
Fundamental Cellular and Molecular Mechanisms governing Brain Development
The symposium will start with Prof Cooper, who will present “From neural tube to neocortex: the role of adhesion in maintaining stem cell morphology and function”. Then, Dr Tsai will talk about “In the search for new genes involved in brain development and disorders”. Dr Del Pino will discuss “Regulation of intrinsic network activity during area patterning in the cerebral cortex”, and Dr Wang will present “Modelling Neurodevelopmental Disorders in Flies”.
Receptor Costs Determine Retinal Design
Our group is interested in discovering design principles that govern the structure and function of neurons and neural circuits. We record from well-defined neurons, mainly in flies’ visual systems, to measure the molecular and cellular factors that determine relevant measures of performance, such as representational capacity, dynamic range and accuracy. We combine this empirical approach with modelling to see how the basic elements of neural systems (ion channels, second-messenger systems, membranes, synapses, neurons, circuits and codes) combine to determine performance. We are investigating four general problems. How are circuits designed to integrate information efficiently? How do sensory adaptation and synaptic plasticity contribute to efficiency? How do the sizes of neurons and networks relate to energy consumption and representational capacity? To what extent have energy costs shaped neurons, sense organs and brain regions during evolution?
‘Optimistic’ and ‘pessimistic’ decision-making as an indicator of animal emotion and welfare
Reliable and validated measures of emotion in animals are of great import; they are crucial to better understanding and developing treatments for human mood disorders, and they are necessary for ensuring good animal welfare. We have developed a novel measure of emotion in animals that is grounded in theory and psychological research – decision-making under ambiguity. Specifically, we consider that more ‘optimistic’ decisions about ambiguous stimuli reflect more positive emotional states, while the opposite is true for more ‘pessimistic’ decisions. In this talk, we will outline the background and implementation of this measure, review meta-analyses conducted to validate it, and discuss how computational modelling has been used to further understand the cognitive processes underlying ‘optimistic’ and ‘pessimistic’ decision-making as an indicator of animal emotion and welfare.
Is it Autism or Alexithymia? Explaining atypical socioemotional processing
Emotion processing is thought to be impaired in autism and linked to atypical visual exploration and arousal modulation to others’ faces and gaze, yet evidence is equivocal. We propose that, where observed, atypical socioemotional processing is due to alexithymia, a distinct but frequently co-occurring condition which affects emotional self-awareness and interoception. In Study 1 (N = 80), we tested this hypothesis by studying the spatio-temporal dynamics and entropy of eye-gaze during emotion processing tasks. Evidence from traditional and novel methods revealed that atypical eye-gaze and emotion recognition is best predicted by alexithymia in both autistic and non-autistic individuals. In Study 2 (N = 70), we assessed interoceptive and autonomic signals implicated in socioemotional processing, and found evidence for alexithymia-driven (not autism-driven) effects on gaze and arousal modulation to emotions. We also conducted two large-scale studies (N = 1300) using confirmatory factor-analytic and network modelling, and found evidence that alexithymia and autism are distinct both at a latent level and in their intercorrelations. We argue that: 1) models of socioemotional processing in autism should conceptualise difficulties as intrinsic to alexithymia, and 2) assessment of alexithymia is crucial for diagnosis and personalised interventions in autism.
Computational modelling of dentate granule cells reveals Pareto optimal trade-off between pattern separation and energy efficiency (economy)
Bernstein Conference 2024
Deep non-linear mixed effects modelling of voltage-gated potassium channels
Bernstein Conference 2024
Modelling Systems Memory Consolidation with neural fields
COSYNE 2022
Modelling ecological constraints on visual processing with deep reinforcement learning
COSYNE 2023
Advanced metamodelling on the o2S2PARC computational neurosciences platform facilitates stimulation selectivity and power efficiency optimization and intelligent control
FENS Forum 2024
Alpha-synuclein pathology from human-derived Lewy body inoculations in the mouse olfactory bulb: Modelling early Parkinson’s disease
FENS Forum 2024
Cellular response to oxidative stress and senescence in Fmr1 knockout mice modelling Fragile X Syndrome
FENS Forum 2024
Can dynamic causal modelling (DCM) identify multistable neural circuits for decision-making?
FENS Forum 2024
Dynamic transcellular molecular exchange: A novel view on extracellular matrix remodelling
FENS Forum 2024
Exploring the combinatorial, diagnostic utility of multimodal biomarkers in differentially diagnosing Dementia with Lewy Bodies from Alzheimer’s through predictive statistical modelling
FENS Forum 2024
Fitting, comparison and selection of different calmodulin kinetic schemes on a single data set using non-linear mixed effects modelling
FENS Forum 2024
Genetic mechanisms for impaired synaptic plasticity in schizophrenia revealed by computational modelling
FENS Forum 2024
A hierarchical Bayesian mixture approach for modelling neuronal connectivity patterns from MAPseq data
FENS Forum 2024
New insights from modelling neurons in PRRT2 patients
FENS Forum 2024
Involvement of glioblastoma-derived extracellular vesicles in promoting endothelial cell remodelling: Implications for glioblastoma tumour progression
FENS Forum 2024
Mathematical modelling of ATP-induced Ca2+ transients in Deiters cells considering the tonotopic axis
FENS Forum 2024
Modelling chemotherapy-induced peripheral neuropathy on-a-chip
FENS Forum 2024
Modelling cognitive and psychiatric behavioural traits in a mouse model of neurofibromatosis type I
FENS Forum 2024
Modelling determinants of region-specific dopamine dynamics in the striatum
FENS Forum 2024
Modelling MSA disease through the generation of brain organoids
FENS Forum 2024
Modelling Dravet syndrome using human induced pluripotent stem cell (hiPSC)-derived neural circuits
FENS Forum 2024
Modelling Koolen-de Vries syndrome in neural organoids
FENS Forum 2024
Modelling neurodevelopmental disorder risk in inborn errors of immunity
FENS Forum 2024
Modelling the radial glia scaffold in vitro to study radial migration of pyramidal neurons
FENS Forum 2024
Modelling regional specification in brain organoids using a novel mesofluidic device
FENS Forum 2024
Modelling the stress response in SH-SY5Y-derived neurons: Disentangling the mineralocorticoid and glucocorticoid receptors
FENS Forum 2024
Modelling synaptic tau and Aβ pathology in human organotypic brain slice cultures
FENS Forum 2024
Network integration of neurons with different (somatic vs. dendritic) axon origin: A computational modelling approach
FENS Forum 2024
Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling
FENS Forum 2024
Preoptic circuit remodelling underlies alloparental care in juvenile mice
FENS Forum 2024
Probing differences in decision process settings across contexts and individuals through joint RT-EEG hierarchical modelling
FENS Forum 2024
Quantitative modelling of the intracellular biochemical mechanisms underlying long-term potentiation in a CA1 pyramidal cell spine head
FENS Forum 2024
In vitro modelling of immune effector cell-associated neurotoxicity syndrome (ICANS) resulting from CAR T-cell therapy treating haematological cancer
FENS Forum 2024
Joint neural-cognitive modelling of free recall: using the LPP to model emotional memory
Neuromatch 5
Moving from phenomenological to predictive modelling: Pitfalls of modelling brain stimulation in silico
Neuromatch 5
Modelling rTMS-Induced Metaplasticity Dynamics within Macroscopic Oscillatory Brain Circuits
Neuromatch 5
Where are the neural architectures? The curse of structural flatness in neural network modelling
Neuromatch 5