Machine Learning
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
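To make the core idea concrete, here is a minimal sketch (not the PredNet implementation; layer sizes, learning rate and data are hypothetical) of the prediction-error computation that predictive-coding models build on: a top-down generative pass predicts the input, and the residual error drives the update of the internal representation.

    # Minimal predictive-coding sketch: top-down prediction, prediction error,
    # error-driven update of the latent representation (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n_input, n_latent = 64, 16                            # hypothetical layer sizes
    W = rng.normal(scale=0.1, size=(n_input, n_latent))   # top-down (generative) weights

    def predictive_coding_step(x, r, lr=0.05, n_iters=20):
        """Refine the latent r so the prediction W @ r matches the input x."""
        for _ in range(n_iters):
            prediction = W @ r
            error = x - prediction            # prediction-error signal
            r = r + lr * (W.T @ error)        # representation update driven by the error
        return r, error

    x = rng.normal(size=n_input)              # a synthetic sensory input
    r, err = predictive_coding_step(x, np.zeros(n_latent))
    print("residual prediction-error norm:", np.linalg.norm(err))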
Prof. KongFatt Wong-Lin
Postdoctoral Research Associate Position in Computational Neuroscience (Computational Modelling of Decision Making) Applications are invited for an externally funded Postdoctoral Research Associate position at the Intelligent Systems Research Centre (ISRC) at Ulster University, UK. The successful candidate will develop and apply computational modelling and theoretical and analytical techniques to understand brain and behavioural data across primate species, and will apply biologically based neural network modelling to elucidate mechanisms underlying perceptual decision-making. The duration of the position is 24 months, from January 2024 until the end of 2025. The post holder will be based at the ISRC at Ulster University, working with Prof. KongFatt Wong-Lin and his team, while collaborating closely with international partners in the USA and the Republic of Ireland, namely Prof. Michael Shadlen at Columbia University (USA), Prof. Stephan Bickel at Northwell-Hofstra School of Medicine (USA), Prof. Redmond O'Connell at Trinity College Dublin (Ireland), Prof. Simon Kelly at University College Dublin (Ireland), and Prof. S. Shushruth at University of Pittsburgh (USA). The ISRC is dedicated to developing a bio-inspired computational basis for AI to power future cognitive technologies. This is achieved by understanding how the brain works at multiple levels, from cells to cognition, and applying that understanding to create models and technologies that solve complex issues facing people and society. All applicants should hold a degree in Computational Neuroscience, Computational Biology, Neuroscience, Computing, Engineering, Mathematics, Data Science, Physical Sciences, Biology, or a cognate area. Apply online: https://my.corehr.com/pls/coreportal_ulsp/erq_jobspec_version_4.display_form?p_company=1&p_internal_external=E&p_display_in_irish=N&p_applicant_no=&p_recruitment_id=023762&p_process_type=&p_form_profile_detail=&p_display_apply_ind=Y&p_refresh_search=Y Closing date for receipt of completed applications: 8th November 2023. Job Ref: 023762. For any informal enquiries regarding this position, please contact KongFatt Wong-Lin; email: k.wong-lin@ulster.ac.uk ; website: https://www.ulster.ac.uk/staff/k-wong-lin
University of Bristol
The role The School of Engineering Mathematics and Technology at the University of Bristol is seeking to appoint a Senior Lecturer / Associate Professor whose research encompasses neural computation, machine learning and AI. If you are earlier in your career, the post is also available at Lecturer level. The University of Bristol is an exciting centre for research into the nature of computation and inference in humans, animals and machines. Our computational neuroscience group has made important contributions in, for example, Bayesian approaches to data and inference, biomimetic deep learning, anatomically-constrained neural networks and the theory of neural networks. The University has a long tradition of cross-disciplinary research, and Computational Neuroscience is part of both the Bristol Neuroscience Network and the Intelligent Systems Group; we are recognised for our central role in the local neuroscience and machine learning/AI communities. You would be joining the University at an exciting time, as we embark on a £500M investment in our new campus and create a home for the UK’s AI Research Resource, housing the UK’s most powerful supercomputer. We are committed to an inclusive and diverse environment where everyone can thrive. We welcome applicants from all backgrounds, especially those from under-represented communities. We offer flexible working arrangements to help balance professional and personal commitments. What will you be doing? You will conduct research at the interface between computational neuroscience and machine learning, contribute to the associated teaching on our degree programmes, and take part in academic administration. You will take part in our lively research community and join our internationally renowned researchers in producing high-quality research with the potential to secure research funding.
VIB Center for Computational Biology and AI
The VIB Center for AI & Computational Biology (VIB.AI) and KU Leuven are searching for three Principal Investigators to join their faculty (Group Leader at VIB, Professor at KU Leuven). We are particularly interested in recruiting faculty members who use and develop artificial intelligence methods and mechanistic mathematical models to address fundamental questions in biology. We welcome applications across all domains of machine learning, artificial intelligence, and associated fields. Examples of research topics include but are not limited to: development of new AI architectures for biology and hybrid models that combine deep learning with mechanistic models; foundation models of genome regulation using single-cell and spatial multi-omics data; AI-based modeling of protein structure and protein interaction networks; AI-based modeling of cell morphology and tissue function using imaging and computer vision; AI models of disease and digital twin applications. Biological applications are broad: microorganisms, plant biology, biodiversity and ecology, neuroscience, cancer, and immunology. We also welcome applicants with applied projects, including synthetic biology, AI-driven experiments (experiment-in-the-loop), and bio-engineering. The position comes with full salary and core funding (an internationally competitive package) that is renewable for multiple additional 5-year periods, access to state-of-the-art research facilities and top-notch core support, and support to attract talented PhD students and postdocs from across the world.
Assignment
Research. As a VIB Group Leader and KU Leuven Professor, you will be expected to (continue to) build your research program with your own independent research group, and to set up or consolidate a strong network with other researchers within VIB, KU Leuven and beyond. You strive for excellence in your research and thereby contribute to the scientific development of the new VIB.AI center.
Teaching. The candidate will be appointed at the Faculty of Medicine or the Faculty of Engineering Science, and will take up teaching assignments (in English or in Dutch) in either of these faculties. You ensure high-quality education, with a clear commitment to the quality of the program as a whole. You also contribute to the pedagogic project of the university through the supervision of MSc theses and as a promoter of PhD students.
Service. You provide scientific, societal and internal services that contribute to the reputation of the entire VIB and university.
Your profile
- PhD or equivalent experience in machine learning or a related quantitative field (Computer Science, Artificial Intelligence, Statistics, Mathematics, Physics, Computational Biology/Chemistry).
- Candidates will be considered at junior or more senior levels, assuming relevant (post-doctoral) experience in foundational machine learning research and/or applied machine learning in an academic and/or industrial setting.
- Excellent machine learning and programming experience.
- A record of impactful publications that demonstrate creativity and originality and address relevant problems in biology and computational research.
- Demonstrated ability to acquire competitive funding.
- An interdisciplinary mindset and a keenness to collaborate broadly in the center, the department and the university.
- Motivation to guide postdoctoral researchers, PhD interns, and full-time scientists.
- International working experience.
- A thorough knowledge of spoken and written English.
- The official administrative language used at KU Leuven is Dutch. If you do not speak Dutch (or do not speak it well) at the start of employment, KU Leuven will provide language training to enable you to take part in meetings. Before teaching courses in Dutch or English, you will be given the opportunity to learn Dutch or English, respectively, to the required standard.
We offer
- Substantial core research funding that is renewable every 5 years.
- An attractive employment package including 100% of your salary and excellent health benefits.
- Access to a vibrant academic environment that encourages collaboration with top experts in biology, computational biology, engineering and computer science, both at VIB and KU Leuven.
- Access to a highly talented pool of students in biology, computational biology, engineering and computer science from the KU Leuven bachelor and master programs.
- New open-design, state-of-the-art research space.
- Access to computing cluster infrastructure at the VIB Data Core and the Flemish Supercomputer Center.
- Access to excellent, staffed core facilities at the Center, at VIB, and at KU Leuven (including sequencing, proteomics, single-cell, microscopy, data core, and many more).
- The possibility to also perform wet-lab activities in state-of-the-art infrastructure.
- Broad administrative support, including help recruiting technicians, PhD students and postdoctoral scientists for your group.
- Access to a dedicated business development team specialized in technology transfer and valorization.
- Professional leadership training.
- An internationally recognized workplace that values diversity and promotes an inclusive environment.
- Help with relocation and establishing a life in Belgium, including visa application (if necessary) and finding housing, schooling and daycare.
- The successful candidate should also be selected for a Professor position at KU Leuven.
About the VIB Center for AI and Computational Biology
VIB (Flanders Institute for Biotechnology) is an entrepreneurial non-profit research institute with a clear focus on ground-breaking strategic basic research in the life sciences, and operates in close partnership with the five universities in Flanders. VIB strives for a respectful and supportive working environment and a culture of belonging for diverse talents in the organization. VIB.AI was established in 2023 as the 10th VIB center, with the core mission to study fundamental problems in biology by combining machine learning with in-depth knowledge of biological processes. We aim to work towards foundation models and integrative theories of biological systems, and towards innovative AI-driven biotech applications in synthetic biology, agro-tech, and personalized medicine. AI-driven research at VIB.AI starts from biological questions and challenges that are addressed using state-of-the-art and novel computational and AI strategies, through close interactions and iterations with biological experiments and research labs within VIB.AI and across VIB. Additionally, we are committed to fostering computational and AI research excellence across all VIB Centers, amplifying collaboration (12 co-associated Group Leaders) and pushing the boundaries of innovation in all of VIB’s research domains (plant biology, cancer biology, structural biology, medical biotechnology, neuroscience, immunology and microbiology).
KU Leuven
This position is linked to a professor position at KU Leuven.
Based on the profile and research topic, this position is linked to a KU Leuven Department, namely DME, DCMM or ESAT/CS, with one position available per Department.
Department of Human Genetics (DME)
The Department of Human Genetics is a leading European center for Human Genetics. Its primary objectives revolve around achieving excellence in research, education, and translational applications. These efforts aim to enhance genetic diagnosis, counselling, therapy, and preventive measures. The department's research portfolio encompasses both fundamental and clinical research conducted across various domains, including cultured cells, animal models, and human subjects. The research focus of DME is genome structure, function, and development, using state-of-the-art genomics and bioinformatics methodologies and deploying them for the diagnosis and treatment of genetic disorders.
Department of Cellular and Molecular Medicine (DCMM)
The research focus of DCMM is the exploration of molecular mechanisms of disease: basic cellular and molecular processes and their (patho)physiological effects, and their implications in various human diseases. DCMM combines expertise in techniques of biochemistry, electrophysiology, molecular biology, cell imaging, proteomics, bioinformatics and animal-model development to acquire novel insights into cellular signalling and communication processes. Major areas of research include signalling by ions, lipids and protein phosphorylation, chromatin structure and function, protein structure, (mis)folding and transport, and cell metabolism, death, autophagy and differentiation.
Department of Electrical Engineering (ESAT)
ESAT works on several technological innovations in the fields of energy, integrated circuits, information processing, image and speech processing, and telecommunication systems. ESAT has seven distinct groups: Computer Security and Industrial Cryptography (COSIC), Electrical Energy Systems and Applications (ELECTRA), Electronic Circuits and Systems (MICAS), Micro- and Nanosystems (MNS), Processing Speech and Images (PSI), the Center for Dynamical Systems, Signal Processing and Data Analytics (STADIUS), and Waves: Core Research and Engineering (WAVECORE).
Department of Computer Science (CS)
The Department of Computer Science is globally recognized for its exceptional research and academic programs in fields such as informatics, computer science, artificial intelligence, mathematical engineering, digital humanities, and teacher education. The CS department comprises five distinct units: Distributed and Secure Software (DistriNet), Declarative Languages and Artificial Intelligence (DTAI), Human-Computer Interaction (HCI), Numerical Analysis and Applied Mathematics (NUMA) and Computer Science.
The Leuven Life Sciences ecosystem and city of Leuven
Leuven and the surrounding area of Northern Belgium (Flanders) represent one of the top research destinations in Europe. VIB.AI is part of the VIB Life Sciences Institute (with colleagues in cancer, biotechnology, immunology, plants, microbiology amongst others) and an extensive set of core facilities. KU Leuven is the largest university in Belgium and Europe’s most innovative university, and is home to the Leuven AI Institute and the Leuven University Hospital, one of the largest in Europe. Leuven is also home to Imec, a world-renowned research center for nanoelectronics and digital technology. Leuven is an attractive European university city with a rich history and a lively atmosphere.
The area has a strong biotechnology sector with a wide variety of spin-offs and start-ups. This, together with the presence of the University of Leuven and the University Hospital, makes Leuven particularly internationally oriented and tech-minded, and a natural home for researchers and their families; the city was even awarded the title of European Capital of Innovation 2020 by the European Commission. English is very widely spoken in the city and surrounding area. Leuven offers an affordable, high standard of living, has an international school, and ample daycare options. The public education system and public health care system in Flanders are world-class, easily accessible, and low-cost to end users. Public transport is excellent and widely available. Brussels, the capital of Europe, is only 20 minutes away. Leuven is also only 14 minutes by train from Brussels Airport, which has many daily direct flights to North America, Africa and Asia. There are also high-speed direct international rail connections to numerous cities including Paris, London, Frankfurt and Amsterdam.
Start date: 2024
How to apply?
Please use the VIB HR application tool and upload:
- a cover/motivation letter
- your full CV with publication list
- a 2-page biosketch including your top 5 publications or achievements
- contact details of 3 referees
- a 2-4 page statement of your research plan, including a brief statement reflecting your vision for the new VIB.AI center
Application deadline: 31st January 2024
For more information, contact Stein Aerts (stein.aerts@vib.be), director of VIB.AI.
University of Chicago - Grossman Center for Quantitative Biology and Human Behavior
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience.
Gatsby Computational Neuroscience Unit
The Gatsby Computational Neuroscience Unit welcomes applications for its PhD Programme in Theoretical Neuroscience and Machine Learning (September 2024 entry). Students complete a 4-year PhD in either machine learning or theoretical neuroscience, with a minor emphasis in the complementary field. Courses in the first year provide a comprehensive introduction to both fields and to systems neuroscience, with multidisciplinary training in other areas of neuroscience also available. Students are encouraged to work and interact closely with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour as well as the Centre for Computational Statistics and Machine Learning, to take full advantage of the multidisciplinary research environment. PhD research topics can focus on (but are not limited to): graphical models, kernel methods, Bayesian statistics, reinforcement learning, network and relational data, neural data analysis, neural representations, computation and dynamics, learning, and neural systems. Full funding is available regardless of nationality and current residence.
Neuro-Electronics Research Flanders (NERF)
Want to do a PhD in neurosciences? Join us - we're recruiting! Neuro-Electronics Research Flanders (NERF) is an interdisciplinary research center located in Leuven, Belgium. We study neuronal circuits and develop new technologies to link circuit activity to brain function. We offer students an opportunity to carry out cutting-edge systems neuroscience research in an international and collaborative environment, with multi-disciplinary training and access to advanced neurotechnologies. More details about the positions and NERF can be found at https://nerfphdcall.sites.vib.be/en
Prof Nathan Lepora and Dr Efi Psomopoulou
Project information We seek a talented and motivated new PhD student to join our team at the University of Bristol. The successful applicant will join the largest centre for multidisciplinary robotics research in the UK. This project will apply cutting-edge robotics and machine learning approaches to advance the physical intelligence of robots toward human-like bimanual object manipulation. The studentship will be based in Bristol Robotics Laboratory and the University of Bristol, co-supervised by Prof. Nathan Lepora (https://lepora.com/) and Dr Efi Psomopoulou. The role is part of a €7M Horizon Europe-funded project on pushing the limits of physical intelligence and performance of robots, particularly in bimanual manipulation. The student will have the benefit of being part of a large international collaboration of many leading European research groups in robotic manipulation, and a vibrant team of postdoctoral and PhD researchers in the Dexterous Robotics group (https://www.bristolroboticslab.com/dexterous-robotics). The project introduces a novel technological framework for enabling robots to perform complex object manipulation tasks, allowing them to efficiently manipulate highly diverse objects with various properties in terms of shape, size and physical characteristics, similarly to how humans do. We particularly focus on bimanual manipulation robots that can operate in challenging, real-world, possibly human-populated environments, and we further research, develop and fuse the necessary technologies in robot perception, cognition, mechatronics and control to allow such human-like, efficient robotic object manipulation, towards step changes in contemporary service robots. The position will be based at the Bristol Robotics Laboratory (https://www.bristolroboticslab.com/), the largest centre for multidisciplinary robotics research in the UK. It will operate within the internationally-leading Dexterous Robotics Group at Bristol Robotics Laboratory, which is an exciting and vibrant research group with several recent lecturer appointments, 25 researchers and a range of state-of-the-art robot equipment. You will use dedicated facilities and expertise from the Robotics Lab in addition to those of the Faculties of Engineering and Science at the University of Bristol and project partners. You will be working in a team with the two supervisors, a postdoctoral Research Associate and a Research Technician (both to be advertised soon). We intend to recruit other PhD students to the team to provide a cohesive and supportive environment for its members. Start date The post starts on November 1st 2023 and lasts for 3.5 years. Application process Please contact Dr Efi Psomopoulou (efi.psomopoulou[at]bristol.ac.uk) and Professor Nathan Lepora (n.lepora[at]bristol.ac.uk) for more details about this post and to apply. Applicants should send a short CV (2 pages max) and a 1-page personal statement describing why they are interested in the post. Personal statement: Please also provide a personal statement that describes your training and experience so far, your motivation for doing a PhD, your motivations for applying to the University of Bristol, and why you think we should select you. We are keen to support applicants from minority and under-represented backgrounds (based on protected characteristics) and those who have experienced other challenges or disadvantages. We encourage you to use your personal statement to ensure we can take these factors into account.
We will keep applications open until the post is filled. Funding information This is a fully-funded PhD studentship at standard UKRI rates (currently £18,022 for the 2023/24 year). Home fees for UK and Irish residents will be covered. There will be additional funds from the project grant for equipment and travel that are substantially in excess of those usually available for PhD studies. NOTE: This scholarship covers tuition fees for UK and Irish citizens and EU applicants who have been resident in the UK for at least 3 years (some constraints are in place around residence for education) and have UK settlement or pre-settlement status under the EU Settlement Scheme. International students can apply but would need to cover the difference between home and overseas fees.
Prof Mario Dipoppa
We are looking for candidates who are eager to solve fundamental questions with a creative mindset. Candidates should have a strong publication track record in Computational Neuroscience or a related quantitative field, including but not limited to Computer Science, Machine Learning, Engineering, Bioinformatics, Physics, Mathematics, and Statistics. Candidates holding a Ph.D. degree who are interested in joining the laboratory as postdoctoral researchers should submit a CV including a publication list, a copy of a first-authored publication, a research statement describing past research and career goals (max. two pages), and contact information for two academic referees. The selected candidates will be working on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, which include but are not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code. Our research team, the Theoretical and Computational Neuroscience Laboratory, is on the main UCLA campus and enjoys close collaborations with the world-class neuroscience community there. The lab, led by Mario Dipoppa, is a cooperative and vibrant environment where all members are offered excellent scientific training and career mentoring. We strongly encourage candidates to apply early, as applications will be reviewed until the positions are filled. The positions are available immediately with a flexible starting date. Please submit the application material as a single PDF file with your full name in the file name to mdipoppa@g.ucla.edu. Informal inquiries are welcome. For more details visit www.dipoppalab.com.
Prof Mario Dipoppa
We are looking for candidates with a keen interest in gaining research experience in Computational Neuroscience, pursuing their own projects, and supporting those of other team members. Candidates should have a bachelor's or master's degree in a quantitative discipline and strong programming skills, ideally in Python. Candidates interested in joining the laboratory as research associates should send a CV, a research statement describing past research and career goals (max. one page), and contact information for two academic referees. The selected candidates will be working on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, which include but are not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code. Our research team, the Theoretical and Computational Neuroscience Laboratory, is on the main UCLA campus and enjoys close collaborations with the world-class neuroscience community there. The lab, led by Mario Dipoppa, is a cooperative and vibrant environment where all members are offered excellent scientific training and career mentoring. We strongly encourage candidates to apply early as applications will be reviewed until the positions are filled. The positions are available immediately with a flexible starting date. Please submit the application material as a single PDF file with your full name in the file name to mdipoppa@g.ucla.edu. Informal inquiries are welcome. For more details visit www.dipoppalab.com.
Lancaster University Leipzig
Assistant Professor (Lecturer) in Computer Science, Data Science Lancaster University Leipzig Salary: €60,722 to €68,051 Closing Date: Monday 31 July 2023 Interview Date: Wednesday 16 August 2023 Reference: 0721-23 Lancaster University Leipzig, Germany Lancaster University invites applications for one post of Assistant Professor (Lecturer) in Computer Science to join its exciting new campus in Leipzig, Germany. Located in one of Germany’s most vibrant, livable, and attractive cities, the Leipzig campus offers the same high academic quality and fully rounded student experience as in the UK, with a strong strategic vision of excellence in teaching, research, and engagement. The position is to support the upcoming MSc programme in Data Science, and to complement the department’s current research strengths in Intelligent Systems and Artificial Intelligence. The candidate is expected to have solid research foundations and a strong commitment to teaching Data Science topics such as Data Science Fundamentals, Data Mining, and Intelligent Data Analysis and Visualisation. The ideal candidate should have a completed PhD degree and demonstrated capabilities in teaching, research, and engagement in the areas of Data Science. Candidates should be able to deliver excellent teaching at graduate and undergraduate level, pursue their own independent research, and develop publications in high quality academic journals or conferences. Candidates are expected to have a suitable research track record of targeting high quality journals or a record of equivalent high-quality research outputs. Colleagues joining LU Leipzig’s computer science department will benefit from a very active research team in Leipzig with a focus on Intelligent Systems and Artificial Intelligence in the wider sense, but will also have access to the research environment at the School of Computing and Communications in the UK. We offer a collegial and multidisciplinary environment with enormous potential for collaboration and for work on challenging real-world problems. German language skills are not a prerequisite for the role, though we are seeking applicants with an interest in making a long-term commitment to Lancaster University in Leipzig. Please note this role is a full-time, indefinite-duration appointment based in Leipzig, Germany. The contract is with Lancaster University Leipzig under German law. Ideally, we would like the appointed candidate to start no later than 1st January 2024. The opportunity is unique as the role is permanent after a 6-month probationary period, and offers the opportunity of being promoted to associate professor and professor in line with performance. We also support administrative procedures related to settlement in Germany and provide assistance in finding accommodation, family integration (school registration for children), German language training (for you and your partner), and relocation expenses.
Prof. Edmund Wascher
The core aspect of the position is the analysis of complex neurocognitive data (mainly EEG). The data comes from different settings and will also be analyzed using machine learning/AI. Scientific collaboration in a broad-based longitudinal study investigating the prerequisites for healthy lifelong working is desired. The study is in its second wave of data collection and represents a globally unique dataset. In addition to detailed standard psychological and physiological data, neurocognitive EEG data from more than 10 experimental settings and the entire genome of the subjects are available. For the analysis, machine learning will be used in addition to state-of-the-art EEG analysis methods. The position will be embedded in a group on computational neuroscience. Please find the full ad here: https://www.ifado.de/ifadoen/careers/current-job-offers/?noredirect=en_US#job1
Rava Azeredo da Silveira
Research questions will be chosen from a range of topics in theoretical/computational neuroscience and cognitive science, involving either data analysis or theory, and drawing on recent machine learning approaches.
Ing. Mgr. Jaroslav Hlinka, Ph.D.
Research Fellow / Postdoc positions in Complex Networks and Brain Dynamics We are looking for new team members to join the Complex Networks and Brain Dynamics group to work on its interdisciplinary projects. The group is part of the Department of Complex Systems, Institute of Computer Science of the Czech Academy of Sciences - based in Prague, Czech Republic, https://www.cs.cas.cz/. We focus on the development and application of methods of analysis and modelling of real-world complex networked systems, with particular interest in the structure and dynamics of human brain function. Our main research areas are neuroimaging data analysis (fMRI & EEG, iEEG, anatomical and diffusion MRI), brain dynamics modelling, causality and information flow inference, nonlinearity and nonstationarity, graph theory, machine learning and multivariate statistics; with applications in neuroscience, climate research, economics and general communication networks. More information about the group at http://cobra.cs.cas.cz/. Conditions: • Contract is for 6-24 months duration. • Positions are available immediately or upon agreement. • Applications will be reviewed on a rolling basis with a first cut-off point on 30. 09. 2022, until the positions are filled. • This is a full-time fixed term contract appointment. Part time contract negotiable. • Monthly gross salary: 42 000 – 55 000 CZK based on qualifications and experience. Cost Of Living Comparison • Bonuses and travel funding for conferences and research stays depending on performance. • No teaching duties.
Ing. Mgr. Jaroslav Hlinka, Ph.D.
A Postdoc or Junior Scientist position is available to join the Complex Networks and Brain Dynamics group for the project: “Predicting functional outcome in schizophrenia from multimodal neuroimaging and clinical data” funded by the Czech Health Research Council. The project involves the development of tools to predict the functional outcome of schizophrenia from multimodal neuroimaging, clinical and cognitive measurements taken early after disease onset. To overcome limitations due to the high dimensionality of the data, we combine robust machine-learning tools, data-driven feature selection and theory-based brain network priors. The project is carried out in collaboration with the National Institute of Mental Health, using its unique, large and rich imaging, cognitive and biochemical data of early-stage schizophrenia patients. Conditions: • Contract is of 12-30 months duration (with possibility of a follow-up tenure-track application). • Starting date: position is available immediately. • Applications will be reviewed on a rolling basis with a first cut-off point on 30. 9. 2022 • This is a full-time fixed term contract appointment. Part time contract negotiable. • Monthly gross salary: 42 000 – 48 000 CZK based on qualifications and experience. Cost Of Living Comparison • Bonuses depending on performance and travel funding for conferences and research stays. • Contribution towards relocation costs for a successful applicant coming from abroad: 10 000 CZK plus 10 000 CZK for family (spouse and/or children). • No teaching duties.
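As a hedged illustration of the general recipe described above (data-driven feature selection feeding a regularized classifier under cross-validation), the following sketch uses scikit-learn on synthetic stand-in data; the features, labels and model choices are placeholders, not the project's actual pipeline or its brain network priors.

    # Illustrative only: feature selection + regularized classifier + cross-validation
    # on synthetic data standing in for multimodal neuroimaging/clinical features.
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 500))      # hypothetical: 120 patients x 500 features
    y = rng.integers(0, 2, size=120)     # hypothetical binary functional outcome

    model = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=50)),           # data-driven feature selection
        ("clf", LogisticRegression(C=1.0, max_iter=1000)),  # robust, regularized classifier
    ])
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))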
Prof. Gustau Camps-Valls
5 PhD and postdoc positions on AI for Earth sciences - University of Valencia
Dear Colleagues, We have 5 open PhD and postdoc positions in two exciting projects:
1. "Causal4Africa: Causal inference to understand food security", in collaboration with the University of Reading and Microsoft Research
2. "AI4CS: AI for Complex Systems", in collaboration with many national and international institutes, and with close ties to our ERC Synergy Grant USMILE on "Understanding and Modeling the Earth System with Machine Learning"
* Details about the positions and the application form are here: http://isp.uv.es/openings
* Applications will be evaluated as soon as they are received, and the positions will remain open until filled.
* Full consideration will be given to applications that are received before October 15, 2022.
* Who should apply? Apply only if you are knowledgeable in machine learning, deep learning and causal inference, and strongly interested in Earth, climate and social sciences.
Please feel free to share with any potential candidates! Best regards, Gustau
--
Prof. Gustau Camps-Valls, IEEE Fellow, ELLIS Fellow
Image Processing Laboratory (IPL) - Building E4 - Floor 4
Universitat de València
C/ Catedrático José Beltrán, 9
46980 Paterna (València). Spain
Tansu Celikel
The School of Psychology (psychology.gatech.edu/) at the GEORGIA INSTITUTE OF TECHNOLOGY (www.gatech.edu/) invites nominations and applications for 5 open-rank tenure-track faculty positions with an anticipated start date of August 2023 or later. The successful applicant will be expected to demonstrate and develop an exceptional research program. The research area is open, but we are particularly interested in candidates whose scholarship complements existing School strengths in Adult Development and Aging, Cognition and Brain Science, Engineering Psychology, Work and Organizational Psychology, and Quantitative Psychology, and takes advantage of quantitative, mathematical, and/or computational methods. The School of Psychology is well-positioned in the College of Sciences at Georgia Tech, a University that promotes translational research from the laboratory and field to real-world applications in a variety of areas. The School offers multidisciplinary educational programs, graduate training, and research opportunities in the study of mind, brain, and behavior and the associated development of technologies that can improve human experience. Excellent research facilities support the School’s research and interdisciplinary graduate programs across the Institute. Georgia Tech’s commitment to interdisciplinary collaboration has fostered fruitful interactions between psychology faculty and faculty in the sciences, computing, business, engineering, design, and liberal arts. Located in the heart of Atlanta, one of the nation's most academic, entrepreneurial, creative and diverse cities with an excellent quality of life, the School actively develops and maintains a rich network of academic and applied behavioral science/industrial partnerships in and beyond Atlanta. Candidates whose research programs foster collaborative interactions with other members of the School and further contribute to bridge-building with other academic and research units at Tech and industries are particularly encouraged to apply. Applications can be submitted online (bit.ly/Join-us-at-GT-Psych) and should include a Cover Letter, Curriculum Vitae (including a list of publications), Research Statement, Teaching Statement, DEI (diversity, equity, and inclusion) statement, and contact information for at least three individuals who have agreed to provide a reference in support of the application if asked. Evaluation of applications will begin October 10th, 2022 and continue until all positions are filled. Questions about this search can be addressed to faculty_search@psych.gatech.edu. Portal questions will be answered by Tikica Platt, the School’s HR director, and questions about the positions by the co-chairs of the search committee, Ruth Kanfer and Tansu Celikel.
Dr. Tatsuo Okubo
We are a new group at the Chinese Institute for Brain Research (CIBR), Beijing, which focuses on using modern data science and machine learning tools on neuroscience data. We collaborate with various labs within CIBR to develop models and analysis pipelines to accelerate neuroscience research. We are looking for enthusiastic and talented machine learning engineers and data scientists to join this effort. Example projects include (but are not limited to) extracting hidden states from population neural activity, automating behavioral classification from videos, and segmenting neurons from confocal images using deep learning.
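As a rough illustration of the first example project type (hidden-state extraction from population activity), here is a minimal sketch using a Gaussian hidden Markov model; the hmmlearn library, the synthetic data, and all sizes are assumptions for the sketch, not the group's actual pipeline.

    # Fit a Gaussian HMM to (synthetic) population activity and decode hidden states.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    X = rng.poisson(lam=2.0, size=(1000, 30)).astype(float)   # 1000 time bins x 30 neurons (hypothetical)

    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
    model.fit(X)                    # learn state-dependent activity statistics
    states = model.predict(X)       # most likely hidden state in each time bin
    print("inferred state occupancies:", np.bincount(states))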
Dr Flavia Mancini
This is an opportunity for a highly creative and skilled pre-doctoral Research Assistant to join the dynamic and multidisciplinary research environment of the Computational and Biological Learning research group (https://www.cbl-cambridge.org/), Department of Engineering, University of Cambridge. We are looking for a Research Assistant to work on projects related to statistical learning and contextual inference in the human brain. We have a particular focus on learning of aversive states, as this has strong clinical significance for chronic pain and mental health disorders. The RA will be supervised by Dr Flavia Mancini (MRC Career Development fellow, and Head of the Nox Lab, www.noxlab.org), and is expected to collaborate with theoretical and experimental colleagues in Cambridge, Oxford and abroad. The post holder will be located in central Cambridge, Cambridgeshire, UK. As a general approach, we combine statistical learning tasks in humans and computational modelling (using Bayesian inference, reinforcement learning, deep learning and neural networks) with neuroimaging methods (including 7T fMRI). The successful candidate will strengthen this approach and be responsible for designing experiments and for collecting and analysing behavioural and fMRI data using computational modelling techniques. The key responsibilities and duties are: ideating and conducting research studies on statistical/aversive learning, combining behavioural tasks and computational modelling (using Bayesian inference, reinforcement learning, deep learning and/or neural networks) with fMRI in healthy volunteers and chronic pain patients; disseminating research findings; and maintaining and developing technical skills to expand their scientific potential. More info and to apply: https://www.jobs.cam.ac.uk/job/35905/
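For a concrete (and deliberately simplified) picture of the kind of model used to study statistical learning of aversive outcomes, here is a toy error-driven learner tracking the probability of an aversive event across an unsignalled reversal; the learning rate, task structure and data are hypothetical, and the lab's actual models (e.g., Bayesian and contextual-inference models) are richer.

    # Toy Rescorla-Wagner-style learner of aversive-outcome probability.
    import numpy as np

    rng = np.random.default_rng(1)
    true_p = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])  # hidden reversal in threat probability
    outcomes = rng.binomial(1, true_p)                               # 1 = aversive event

    alpha, estimate = 0.1, 0.5          # hypothetical learning rate and initial belief
    estimates = []
    for o in outcomes:
        prediction_error = o - estimate
        estimate += alpha * prediction_error    # error-driven belief update
        estimates.append(estimate)

    print("estimate after reversal: %.2f (true probability 0.8)" % estimates[-1])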
Ing. Mgr. Jaroslav Hlinka, Ph.D.
We are looking for new team members to join the Complex Networks and Brain Dynamics group to work on its interdisciplinary projects. The group is part of the Department of Complex Systems, Institute of Computer Science of the Czech Academy of Sciences - based in Prague, Czech Republic, https://www.cs.cas.cz/en. The position details are at https://www.cs.cas.cz/job-offer/research-fellow-postdoc-position-Hlinka1-2022/en. We focus on the development and application of methods of analysis and modelling of real-world complex networked systems, with particular interest in the structure and dynamics of human brain function. Our main research areas are neuroimaging data analysis (fMRI & EEG, iEEG, anatomical and diffusion MRI), brain dynamics modelling, causality and information flow inference, nonlinearity and nonstationarity, graph theory, machine learning and multivariate statistics; with applications in neuroscience, climate research, economics and general communication networks. More information about the group at http://cobra.cs.cas.cz/. Conditions: • Contract is for 6-24 months duration. • Positions are available immediately or upon agreement. • Applications will be reviewed on a rolling basis with a first cut-off point on 30. 06. 2022, until the positions are filled. • This is a full-time fixed term contract appointment. Part time contract negotiable. • Monthly gross salary: 39000 - 55000 CZK based on qualifications and experience. Cost Of Living Comparison: https://www.numbeo.com/cost-of-living/comparison.jsp • Bonuses and travel funding for conferences and research stays depending on performance. • No teaching duties
Ing. Mgr. Jaroslav Hlinka, Ph.D.
A postdoc or junior scientist position is available to join the Complex Networks and Brain Dynamics group for the project: “Predicting functional outcome in schizophrenia from multimodal neuroimaging and clinical data” funded by the Czech Health Research Council. For more info see: https://www.cs.cas.cz/job-offer/postdoctoral-junior-scientist-position-Hlinka2-2022/en The project involves the development of tools to predict the functional outcome of schizophrenia from multimodal neuroimaging, clinical and cognitive measurements taken early after disease onset. To overcome limitations due to the high dimensionality of the data, we combine robust machine-learning tools, data-driven feature selection and theory-based brain network priors. The project is carried out in collaboration with the National Institute of Mental Health, using its unique, large and rich imaging, cognitive and biochemical data of early-stage schizophrenia patients. Conditions: • Contract is of 12-30 months duration (with possibility of a follow-up tenure-track application). • Starting date: position is available immediately • Applications will be reviewed on a rolling basis with a first cut-off point on 30.6.2022 • This is a full-time fixed term contract appointment. Part time contract negotiable. • Monthly gross salary: 40000 - 47000 CZK based on qualifications and experience. • Bonuses depending on performance and travel funding for conferences and research stays. • Contribution towards relocation costs for a successful applicant coming from abroad: 10 000 CZK plus 10 000 CZK for family (spouse and/or children). • No teaching duties
Prof. Dr. rer. nat. Kerstin Ritter
At Charité - Universitätsmedizin Berlin and the Bernstein Center for Computational Neuroscience, we are looking for a motivated and highly qualified postdoc for methods development at the intersection of explainable machine learning / deep learning and clinical neuroimaging / translational psychiatry. The position will be located in the research groups of Asst. Prof. Kerstin Ritter and Prof. John-Dylan Haynes at Charité Berlin. The main task will be to predict response to cognitive-behavioral psychotherapy, in retrospective data and in a prospective cohort of patients with internalizing disorders (including depression and anxiety), from a complex, multimodal data set comprising tabular as well as imaging data (e.g., clinical data, smartphone data, EEG, structural and functional MRI data). An additional task will be to contribute to the organization and maintenance of the prospective cohort. This study will be one of several projects in the newly established Research Unit 5187 "Precision Psychotherapy" (headed by Prof. Ulrike Lüken).
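To give a rough sense of the kind of analysis involved (prediction from multimodal tabular features plus a simple explainability step), here is a hedged sketch on synthetic stand-in data using scikit-learn; the features, labels, model and importance method are placeholders rather than the study's actual pipeline.

    # Predict treatment response from tabular features and inspect the model with
    # permutation importance (all data here are synthetic placeholders).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))        # hypothetical clinical / EEG / MRI-derived features
    y = rng.integers(0, 2, size=200)      # hypothetical responder vs non-responder labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:5]
    print("most influential feature indices:", top)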
Ruben Coen-Cagli
The Laboratory for Computational Neuroscience (Coen-Cagli lab) invites applications for a postdoctoral position at Albert Einstein College of Medicine (Einstein) in the Bronx, New York City. The position is available immediately; it is funded for two years through an NIH training grant to the Rose F. Kennedy IDDRC at Einstein, and targets eligible candidates interested in careers in the biomedical sciences focused on the neurobiological underpinnings of neurodevelopmental disorders associated with intellectual disability and autism. The candidate will have the opportunity to learn and apply an integrated approach, leveraging innovative experiments and computational modeling of perceptual grouping and segmentation developed by the Coen-Cagli lab, to test theories of sensory processing in autism, in collaboration with the Cognitive Neurophysiology Laboratory (Molholm lab) at Einstein.
Dr. Fleur Zeldenrust
We are looking for a postdoctoral researcher to study the effects of neuromodulators in biologically realistic networks and learning tasks in the Vidi project 'Top-down neuromodulation and bottom-up network computation, a computational study'. You will use cellular and behavioural data gathered by our department over the previous five years on dopamine, acetylcholine and serotonin in mouse barrel cortex, to bridge the gap between single-cell, network and behavioural effects. The aim of this project is to explain the effects of neuromodulation on task performance in biologically realistic spiking recurrent neural networks (SRNNs). You will use biologically realistic learning frameworks, such as FORCE learning, to study how network structure influences task performance. You will use existing open-source data to train an SRNN on a pole-detection task (for rodents using their whiskers) and incorporate realistic network properties of the (barrel) cortex based on our lab's measurements. Next, you will incorporate the cellular effects of dopamine, acetylcholine and serotonin that we have measured into the network, and investigate their effects on task performance. In particular, you will research the effects of biologically realistic network properties (balance between excitation and inhibition and the resulting chaotic activity, non-linear neuronal input-output relations, patterns in connectivity, Dale's law) and incorporate known neuron and network effects. You will build on the single-cell data, network models and analysis methods available in our group, and your results will be incorporated into our group's further research to develop and validate efficient coding models of (somatosensory) perception. We are therefore looking for a team player who can collaborate well with the other group members, and is willing to both learn from them and share their knowledge.
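For orientation, here is a minimal rate-network FORCE-learning sketch in the spirit of Sussillo and Abbott (2009): a chaotic recurrent network whose readout is trained online by recursive least squares and fed back into the dynamics. The network size, target signal and constants are illustrative; the project's spiking networks, neuromodulatory effects and behavioural task are considerably more elaborate.

    # Minimal FORCE learning on a rate network (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(0)
    N, g, dt, tau = 300, 1.5, 0.1, 1.0
    J = g * rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # chaotic recurrent weights
    w_fb = rng.uniform(-1, 1, N)                              # output-feedback weights
    w_out = np.zeros(N)                                       # readout, trained by RLS
    P = np.eye(N)                                             # running inverse correlation estimate

    T = 2000
    t = np.arange(T) * dt
    target = np.sin(2 * np.pi * t / 50.0)                     # toy target output

    x = rng.normal(scale=0.5, size=N)
    for i in range(T):
        r = np.tanh(x)
        z = w_out @ r                                         # network output
        x += dt / tau * (-x + J @ r + w_fb * z)               # leaky rate dynamics with output feedback
        # recursive least-squares (FORCE) update of the readout weights
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_out += (target[i] - z) * k

    print("final output error: %.3f" % abs(target[-1] - w_out @ np.tanh(x)))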
AutoMIND: Deep inverse models for revealing neural circuit invariances
Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives
Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilisation. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.
OpenSPM: A Modular Framework for Scanning Probe Microscopy
OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
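To illustrate the sort of automation a fully scriptable instrument API enables (without reproducing the OpenSPM API itself), here is a toy sketch that sweeps PI controller gains against a simulated first-order plant and keeps the best setting; the plant model, gain ranges and error metric are assumptions made purely for illustration.

    # Conceptual sketch: scripted sweep of PI gains against a toy plant model.
    import numpy as np

    def tracking_error(kp, ki, setpoint=1.0, dt=1e-3, steps=2000):
        """Integrated absolute tracking error of a first-order system under PI control."""
        z, integral, err_sum = 0.0, 0.0, 0.0
        for _ in range(steps):
            error = setpoint - z
            integral += error * dt
            drive = kp * error + ki * integral
            z += dt * (drive - z)          # toy plant: first-order response to the drive
            err_sum += abs(error) * dt
        return err_sum

    candidates = [(kp, ki) for kp in np.linspace(0.5, 5, 10) for ki in np.linspace(0.5, 20, 10)]
    best = min(candidates, key=lambda gains: tracking_error(*gains))
    print("best (kp, ki):", best)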
Learning Representations of Complex Meaning in the Human Brain
Localisation of Seizure Onset Zone in Epilepsy Using Time Series Analysis of Intracranial Data
There are over 30 million people with drug-resistant epilepsy worldwide. When neuroimaging and non-invasive neural recordings fail to localise seizure onset zones (SOZ), intracranial recordings become the best chance for localisation and seizure freedom in those patients. However, intracranial neural activities remain hard to discriminate visually across recording channels, which limits the success of visual inspection of intracranial recordings. In this presentation, I present methods which quantify intracranial neural time series and combine them with explainable machine learning algorithms to localise the SOZ in the epileptic brain. I discuss the potential and limitations of our methods for SOZ localisation in epilepsy, providing insights for future research in this area.
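As an illustration of the general recipe (quantify each channel's time series, then fit an interpretable classifier over the features), here is a small synthetic sketch. The features, data and classifier are placeholder assumptions, not the speaker's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

def channel_features(x):
    """A few simple univariate descriptors of one intracranial channel."""
    line_length = np.abs(np.diff(x)).sum()
    variance = x.var()
    zero_crossings = (np.diff(np.sign(x)) != 0).sum()
    return [line_length, variance, zero_crossings]

# synthetic recording: 40 channels x 2000 samples; "SOZ" channels get an added rhythm
n_channels, n_samples = 40, 2000
X_raw = rng.standard_normal((n_channels, n_samples))
labels = np.zeros(n_channels, dtype=int)
labels[:8] = 1
X_raw[:8] += 3.0 * np.sin(np.linspace(0, 60 * np.pi, n_samples))

features = np.array([channel_features(x) for x in X_raw])
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(features, labels)
print("weights (line length, variance, zero crossings):",
      clf.named_steps["logisticregression"].coef_[0])
```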
On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery
Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.
Trends in NeuroAI - Brain-like topography in transformers (Topoformer)
Dr. Nicholas Blauch will present on his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances by my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Diffusion models were previously shown to have impressive image generation capabilities; I will present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I will discuss a recent project that takes ideas from language modeling to build a generative sequence model of an Xbox game.
Generative models for video games
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances by my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Diffusion models were previously shown to have impressive image generation capabilities; I will present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I will discuss a recent project that takes ideas from language modeling to build a generative sequence model of an Xbox game.
Maintaining Plasticity in Neural Networks
Nonstationarity presents a variety of challenges for machine learning systems. One surprising pathology that can arise in nonstationary learning problems is plasticity loss, whereby making progress on new learning objectives becomes more difficult as training progresses. Networks that are unable to adapt in response to changes in their environment experience plateaus or even declines in performance in highly non-stationary domains such as reinforcement learning, where the learner must quickly adapt to new information even after hundreds of millions of optimization steps. The loss of plasticity manifests in a cluster of related empirical phenomena which have been identified by a number of recent works, including the primacy bias, implicit under-parameterization, rank collapse, and capacity loss. While this phenomenon is widely observed, it is still not fully understood. This talk will present exciting recent results which shed light on the mechanisms driving the loss of plasticity in a variety of learning problems and survey methods to maintain network plasticity in non-stationary tasks, with a particular focus on deep reinforcement learning.
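One minimal way to probe the phenomenon is to train a single small network on a sequence of unrelated tasks and track how well it fits each new one; declining fit quality (and growing numbers of dead units) across tasks is the kind of signature discussed above. The sketch below does this with a NumPy MLP on random regression targets; all sizes and the task design are assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_samples, lr = 20, 64, 256, 0.01
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.1
b2 = np.zeros(1)
X = rng.standard_normal((n_samples, n_in))      # inputs stay fixed across tasks

for task in range(5):
    y = rng.standard_normal((n_samples, 1))     # fresh random targets each task
    for _ in range(2000):                       # full-batch gradient descent
        h = np.maximum(0.0, X @ W1 + b1)        # ReLU hidden layer
        pred = h @ W2 + b2
        err = pred - y
        gW2 = h.T @ err / n_samples
        gb2 = err.mean(0)
        dh = (err @ W2.T) * (h > 0)
        gW1 = X.T @ dh / n_samples
        gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    print(f"task {task}: final MSE {np.mean(err ** 2):.3f}, "
          f"dead hidden units {(h.max(0) == 0).sum()}")
```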
Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)
Lead author Mehdi Azabou will present on his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Machine learning for reconstructing, understanding and intervening on neural interactions
Trends in NeuroAI - Brain-optimized inference improves fMRI reconstructions
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
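The refinement loop described in the abstract can be summarized schematically. The sketch below substitutes toy stand-ins (vectors for images, a fixed linear map for the encoding model, additive noise with a shrinking scale for the diffusion sampler); it mirrors the search-and-refine structure but is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_brain = 64, 128
encoder = rng.standard_normal((d_brain, d_img)) / np.sqrt(d_img)  # "image -> brain" model
true_image = rng.standard_normal(d_img)
brain_activity = encoder @ true_image + 0.1 * rng.standard_normal(d_brain)

seed = rng.standard_normal(d_img)               # stand-in for a base decoder's output
noise_scale = 1.0
for it in range(20):
    # sample a small library of candidate "images" around the current seed
    library = seed + noise_scale * rng.standard_normal((32, d_img))
    # score candidates by how well their predicted activity matches the measurement
    errors = np.linalg.norm(library @ encoder.T - brain_activity, axis=1)
    seed = library[np.argmin(errors)]           # keep the best-matching candidate
    noise_scale *= 0.85                         # reduce stochasticity each iteration
    if noise_scale < 0.05:                      # crude width-based stopping criterion
        break

print("correlation with ground truth:", np.corrcoef(seed, true_image)[0, 1])
```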
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz), which fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold: First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the real-time decoding of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
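For readers unfamiliar with the contrastive objective mentioned in the abstract, here is a minimal NumPy illustration of a CLIP-style InfoNCE loss between paired MEG-window and image embeddings. The random embeddings, dimensionality and temperature are placeholder assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim, temperature = 8, 32, 0.07
img = rng.standard_normal((batch, dim))                # stand-in pretrained image embeddings
meg = img + 0.5 * rng.standard_normal((batch, dim))    # noisy "decoded" MEG embeddings

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

img_n, meg_n = l2_normalize(img), l2_normalize(meg)
logits = meg_n @ img_n.T / temperature                 # pairwise cosine similarities
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
infonce = -np.mean(np.diag(log_probs))                 # matched pairs sit on the diagonal
retrieval_acc = np.mean(np.argmax(logits, axis=1) == np.arange(batch))
print(f"InfoNCE loss {infonce:.3f}, top-1 retrieval accuracy {retrieval_acc:.2f}")
```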
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
Mathematical and computational modelling of ocular hemodynamics: from theory to applications
Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for the identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications. While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both, as mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
BrainLM Journal Club
Connor Lane will lead a journal club on the recent BrainLM preprint, a foundation model for fMRI trained with a self-supervised masked-autoencoder objective. Preprint: https://www.biorxiv.org/content/10.1101/2023.09.12.557460v1 Tweeprint: https://twitter.com/david_van_dijk/status/1702336882301112631?t=Q2-U92-BpJUBh9C35iUbUA&s=19
Self as Processes (BACN Mid-career Prize Lecture 2023)
An understanding of the self helps explain not only human thoughts, feelings, and attitudes but also many aspects of everyday behaviour. This talk focuses on one viewpoint: the self as processes. This viewpoint emphasizes the dynamics of the self, which best connect with the development of the self over time and its realist orientation. We are combining psychological experiments and data mining to comprehend the stability and adaptability of the self across various populations. In this talk, I draw on evidence from experimental psychology, cognitive neuroscience, and machine learning approaches to demonstrate why and how self-association affects cognition and how it is modulated by various social experiences and situational factors.
Foundation models in ophthalmology
Abstract to follow.
Algonauts 2023 winning paper journal club (fMRI encoding models)
Algonauts 2023 was a challenge to create the best model that predicts fMRI brain activity given a seen image. The Huze team dominated the competition and released a preprint detailing their process. This journal club meeting will involve an open discussion of the paper and a Q&A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper also from Huze that we can discuss: https://arxiv.org/pdf/2307.14021.pdf
1.8 billion regressions to predict fMRI (journal club)
Public journal club in which Mihir will present the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), where the authors use hundreds of pretrained model embeddings to predict fMRI activity.
In search of the unknown: Artificial intelligence and foraging
Decoding mental conflict between reward and curiosity in decision-making
Humans and animals are not always rational. They not only rationally exploit rewards but also explore their environment out of curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, which is a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting a conservative tendency to stick to more certain options, and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis of reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
Diverse applications of artificial intelligence and mathematical approaches in ophthalmology
Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.
How AI is advancing Clinical Neuropsychology and Cognitive Neuroscience
This talk aims to highlight the immense potential of Artificial Intelligence (AI) in advancing the field of psychology and cognitive neuroscience. Through the integration of machine learning algorithms, big data analytics, and neuroimaging techniques, AI has the potential to revolutionize the way we study human cognition and brain characteristics. In this talk, I will highlight our latest scientific advancements in utilizing AI to gain deeper insights into variations in cognitive performance across the lifespan and along the continuum from healthy to pathological functioning. The presentation will showcase cutting-edge examples of AI-driven applications, such as deep learning for automated scoring of neuropsychological tests, natural language processing to characterize the semantic coherence of patients with psychosis, and other applications to diagnose and treat psychiatric and neurological disorders. Furthermore, the talk will address the challenges and ethical considerations associated with using AI in psychological research, such as data privacy, bias, and interpretability. Finally, the talk will discuss future directions and opportunities for further advancements in this dynamic field.
Relations and Predictions in Brains and Machines
Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to owe in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
Bridging machine learning and mechanistic modelling
Deep learning applications in ophthalmology
Deep learning techniques have revolutionized the field of image analysis and played a disruptive role in the ability to quickly and efficiently train image analysis models that perform as well as human beings. This talk will cover the beginnings of the application of deep learning in the field of ophthalmology and vision science, and cover a variety of applications that use deep learning as a method for scientific discovery and for uncovering latent associations.
Understanding Machine Learning via Exactly Solvable Statistical Physics Models
The affinity between statistical physics and machine learning has a long history. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the optimization algorithms commonly used for learning.
Exploring the Potential of High-Density Data for Neuropsychological Testing with Coregraph
Coregraph is a tool under development that allows us to collect high-density data patterns during the administration of classic neuropsychological tests such as the Trail Making Test and Clock Drawing Test. These tests are widely used to evaluate cognitive function and screen for neurodegenerative disorders, but traditional methods of data collection only yield sparse information, such as test completion time or error types. By contrast, the high-density data collected with Coregraph may contribute to a better understanding of the cognitive processes involved in executing these tests. In addition, Coregraph could revolutionize the field of cognitive evaluation by aiding in the prediction of cognitive deficits and in the identification of early signs of neurodegenerative disorders such as Alzheimer's dementia. By analyzing high-density graphomotor data through techniques like manual feature engineering and machine learning, we can uncover patterns and relationships that would otherwise remain hidden with traditional methods of data analysis. We are currently in the process of determining the most effective methods of feature extraction and feature analysis to develop Coregraph to its full potential.
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools to not only analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties
Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.
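A stripped-down version of the "dendritic tree as a multi-layer network" setup can be written in a few lines: each "branch" sees only a small patch of the input, branch outputs pass through a nonlinearity, and a "soma" combines them into one decision. The sketch below uses scikit-learn's bundled 8x8 digits (not MNIST) and a 0-vs-rest task to stay small; all sizes and the training scheme are illustrative assumptions, not the authors' model.

```python
import numpy as np
from sklearn.datasets import load_digits

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X = X / 16.0
y = (y == 0).astype(float)                      # one-vs-rest binary task

n_branches, patch = 16, 4                       # 16 "branches", 4 pixels each
idx = rng.permutation(64).reshape(n_branches, patch)   # disjoint input patches
Wb = 0.1 * rng.standard_normal((n_branches, patch))    # branch weights
ws = 0.1 * rng.standard_normal(n_branches)             # soma weights
lr = 0.5

for epoch in range(200):
    branch = np.tanh(np.einsum('bp,nbp->nb', Wb, X[:, idx]))   # branch nonlinearities
    p = 1.0 / (1.0 + np.exp(-(branch @ ws)))                   # soma output (sigmoid)
    err = p - y                                                # cross-entropy gradient
    g_ws = branch.T @ err / len(y)
    d_branch = np.outer(err, ws) * (1.0 - branch ** 2)
    g_Wb = np.einsum('nb,nbp->bp', d_branch, X[:, idx]) / len(y)
    ws -= lr * g_ws
    Wb -= lr * g_Wb

print("training accuracy:", np.mean((p > 0.5) == (y > 0.5)))
```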
Experimental Neuroscience Bootcamp
This course provides a fundamental foundation in the modern techniques of experimental neuroscience. It introduces the essentials of sensors, motor control, microcontrollers, programming, data analysis, and machine learning by guiding students through the “hands on” construction of an increasingly capable robot. In parallel, related concepts in neuroscience are introduced as nature’s solution to the challenges students encounter while designing and building their own intelligent system.
On the link between conscious function and general intelligence in humans and machines
In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function.
From Machine Learning to Autonomous Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.
Learning Relational Rules from Rewards
Humans perceive the world in terms of objects and relations between them. In fact, for any given pair of objects, there is a myriad of relations that apply to them. How does the cognitive system learn which relations are useful to characterize the task at hand? And how can it use these representations to build a relational policy to interact effectively with the environment? In this paper we propose that this problem can be understood through the lens of a sub-field of symbolic machine learning called relational reinforcement learning (RRL). To demonstrate the potential of our approach, we build a simple model of relational policy learning based on a function approximator developed in RRL. We trained and tested our model in three Atari games that required considering an increasing number of potential relations: Breakout, Pong and Demon Attack. In each game, our model was able to select adequate relational representations and build a relational policy incrementally. We discuss the relationship between our model and models of relational and analogical reasoning, as well as its limitations and future directions of research.
From Machine Learning to Autonomous Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf
Neuromatch 5
Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.
Spontaneous Emergence of Computation in Network Cascades
Neuronal network computation and computation by avalanche-supporting networks are of interest to the fields of physics, computer science (computation theory as well as statistical or machine learning) and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), computed by logic automata (motifs) in the form of computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and their rank-ordering by function probabilities due to motifs, and its relationship to symmetry in function space. We also show that the optimal fraction of inhibition observed here supports results in computational neuroscience, relating to optimal information processing.
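To make the setting concrete, the sketch below builds a small random threshold (McCulloch-Pitts) network with excitatory and inhibitory links, clamps two Boolean inputs, and reads off which two-input Boolean function each downstream node computes once the cascade settles. The network size, wiring probabilities and update rule are arbitrary assumptions for illustration only.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n_nodes, p_connect, p_inhib, steps = 12, 0.3, 0.25, 5

W = (rng.random((n_nodes, n_nodes)) < p_connect).astype(float)
W *= np.where(rng.random((n_nodes, n_nodes)) < p_inhib, -1.0, 1.0)  # some links inhibitory
np.fill_diagonal(W, 0.0)

def run(x0, x1):
    """Synchronous threshold updates with nodes 0 and 1 clamped to the inputs."""
    state = np.zeros(n_nodes)
    state[0], state[1] = x0, x1
    for _ in range(steps):
        new = (W @ state > 0.5).astype(float)
        new[0], new[1] = x0, x1
        state = new
    return state

# per-node truth table over the four input combinations (x0 x1 = 00, 01, 10, 11)
table = np.array([run(a, b) for a, b in product([0, 1], repeat=2)]).T
for node, row in enumerate(table[2:], start=2):
    print(f"node {node}: outputs {row.astype(int)} for inputs 00,01,10,11")
```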
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
Attention in Psychology, Neuroscience, and Machine Learning
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
Canonical neural networks perform active inference
The free-energy principle and active inference have received significant attention in the fields of neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively exchanges with its environment. To address this issue, we show that a class of canonical neural networks of rate coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks (featuring delayed modulation of Hebbian plasticity) can perform planning and adaptive behavioural control in the Bayes optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate implicit priors under which the agent's neural network operates and identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
Meta-learning synaptic plasticity and memory addressing for continual familiarity detection
Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task where a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously, and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning, and demonstrates an effective application of machine learning for neuroscience discovery.
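The role of anti-Hebbian storage can be illustrated with a textbook-style toy (not the meta-learned model from the talk): each presented pattern decrements the weight matrix along its own outer product, so the quadratic "energy" of a previously seen pattern becomes strongly negative while novel patterns stay near zero. Pattern sizes and the learning rate below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, n_stored = 200, 0.05, 30
W = np.zeros((n, n))

patterns = rng.choice([-1.0, 1.0], size=(n_stored, n))
for x in patterns:
    W -= eta * np.outer(x, x) / n               # anti-Hebbian storage update

def familiarity(x):
    """Quadratic energy under W; more negative means more familiar."""
    return x @ W @ x

seen = np.mean([familiarity(x) for x in patterns])
novel = np.mean([familiarity(x) for x in rng.choice([-1.0, 1.0], size=(n_stored, n))])
print(f"mean energy: seen {seen:.2f} vs novel {novel:.2f}")
```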
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
Hebbian Plasticity Supports Predictive Self-Supervised Learning of Disentangled Representations
Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. Experience-dependent plasticity presumably exploits temporal contingencies between sensory inputs to build these internal representations. However, the precise mechanisms underlying plasticity remain elusive. We derive a local synaptic plasticity model inspired by self-supervised machine learning techniques that shares a deep conceptual connection to Bienenstock-Cooper-Munro (BCM) theory and is consistent with experimentally observed plasticity rules. We show that our plasticity model yields disentangled object representations in deep neural networks without the need for supervision and implausible negative examples. In response to altered visual experience, our model qualitatively captures neuronal selectivity changes observed in the monkey inferotemporal cortex in-vivo. Our work suggests a plausible learning rule to drive learning in sensory networks while making concrete testable predictions.
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
Population coding in the cerebellum: a machine learning perspective
The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus, complex spikes not only act as a teaching signal for a P-cell; through complex spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping of P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output layer neuron (DCN) that was responsible for the error, as well as the hidden layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.
CNStalk: Using machine learning to predict mental health on the basis of brain, behaviour and environment
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Spiking neural networks are also gaining traction in machine learning, with the promise that neuromorphic hardware will eventually make them much more energy-efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library, but we have subsequently added a Python interface and an OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
Interdisciplinary College
The Interdisciplinary College is an annual spring school which offers a dense state-of-the-art course program in neurobiology, neural computation, cognitive science/psychology, artificial intelligence, machine learning, robotics and philosophy. It is aimed at students, postgraduates and researchers from academia and industry. This year's focus theme "Flexibility" covers (but is not limited to) the nervous system, the mind, communication, and AI & robotics. All this will be packed into a rich, interdisciplinary program of single- and multi-lecture courses, and less traditional formats.
Implementing structure mapping as a prior in deep learning models for abstract reasoning
Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. The Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural net framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module net analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.
Taming chaos in neural circuits
Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
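As a pointer to how the largest Lyapunov exponent of such a rate network can be estimated numerically, here is a Benettin-style two-trajectory sketch for dx/dt = -x + g J tanh(x), where a small perturbation is repeatedly renormalized and its average growth rate is recorded. For large random networks the transition to chaos is expected near g = 1; the network size, integration time and step size below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, T_steps, renorm = 200, 0.05, 4000, 1e-6
J = rng.standard_normal((N, N)) / np.sqrt(N)

def step(x, g):
    """One explicit-Euler step of the rate dynamics dx/dt = -x + g*J*tanh(x)."""
    return x + dt * (-x + g * J @ np.tanh(x))

for g in (0.5, 1.5):
    x = rng.standard_normal(N)
    pert = rng.standard_normal(N)
    pert *= renorm / np.linalg.norm(pert)        # perturbation of fixed small norm
    x_pert = x + pert
    log_growth = 0.0
    for _ in range(T_steps):
        x, x_pert = step(x, g), step(x_pert, g)
        d = np.linalg.norm(x_pert - x)
        log_growth += np.log(d / renorm)
        x_pert = x + (x_pert - x) * (renorm / d)  # renormalize the separation
    print(f"g = {g}: largest Lyapunov exponent estimate {log_growth / (T_steps * dt):.3f}")
```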
Machine learning for measuring and modeling the motor system
Modeling Visual Attention in Neuroscience, Psychology, and Machine Learning
Building mechanistic models of neural computations with simulation-based machine learning
Bernstein Conference 2024
A robust machine learning pipeline for the analysis of complex nightingale songs
Bernstein Conference 2024
Explainable Machine Learning Approach to Investigating Neural Bases of Brain State Classification
COSYNE 2022
Machine Learning Approaches Reveal Prominent Behavioral Alterations and Cognitive Dysfunction in a Humanized Alzheimer Model
COSYNE 2023
Machine learning of functional network and molecular mechanisms in autism spectrum disorder subtypes
COSYNE 2023
Cognitive and intelligence measures for ADHD identification by machine learning models
FENS Forum 2024
Enhancing hypothesis testing via interpretable machine learning frameworks
FENS Forum 2024
Machine learning approach applied to exploration of neuronal sensorimotor processing during a visuomotor rule-based task performed by a monkey
FENS Forum 2024
Machine learning-based exploration of long noncoding RNAs linked to perivascular lesions in the brain
FENS Forum 2024
Machine learning-based identification of ultrasonic vocalization subtypes during rat chocolate consumption
FENS Forum 2024
Machine learning identifies potential anti-glioblastoma small molecules
FENS Forum 2024
Machine learning for individual multivariate fingerprints as predictors of mental well-being in young adults
FENS Forum 2024
A machine learning toolbox to detect and compare sharp-wave ripples across species
FENS Forum 2024
Prediction of antipsychotic-induced extrapyramidal symptoms in schizophrenia using machine learning
FENS Forum 2024
Semi-blind machine learning for fMRI-based predictions of intelligence
FENS Forum 2024
Machine learning and topology classify neuronal morphologies
Neuromatch 5
Optimization techniques for machine learning based classification involving large-scale neuroscience datasets
Neuromatch 5
Predicting Math and Story-Related Auditory Tasks Completed in fMRI using a Logistic Regression Machine Learning Model
Neuromatch 5