Foundation Models
N/A
We are seeking candidates for a Professorship in AI and Foundation Models (open rank).
Saeed Abdullah
The College of Information Sciences and Technology at Penn State is seeking a postdoctoral scholar to join an interdisciplinary team focusing on Human-AI collaboration to train mental health workers. The project is supported by an NSF grant. The position will involve developing computational methods to assess clinical sessions and provide actionable feedback to support effective training. An ideal candidate will have strong research skills and experience in relevant areas (e.g., foundation models, deep learning, natural language processing, reinforcement learning). It will be a full-time appointment for 24 months, with a possibility of renewal dependent upon performance.
Sarath Chandar
We (Sarath Chandar and Amal Zouaq) are seeking multiple postdoctoral researchers in Natural Language Processing (NLP) to work on large language models. The postdocs will be involved in various LLM projects, including but not limited to the following topics: multi-agent / modular LLMs, LLM safety and alignment, bias and fairness, efficient training methods, non-parametric memories, constrained generation, and LLMs and foundation models for biology and medical data. This position will be at Mila, the world-renowned AI hub located in Montreal, Canada – home to over 1000 researchers pushing the boundaries of AI research.
Prof. Baihan Lin
📢 Join the Lin Lab at Mount Sinai! We’re hiring postdocs, research assistants, and PhD students! The Lin Lab, also known as the Bytes of Minds Lab, is on the lookout for driven researchers passionate about computational psychiatry and neuro-AI. Directed by Dr. Baihan Lin (me!) and based at the Icahn School of Medicine at Mount Sinai, New York’s largest hospital network, our lab is uniquely positioned with access to vast data resources and a strong collaborative environment. We’re pushing the boundaries of mental health technology and brain-inspired AI to create intelligent systems that can transform healthcare and deepen our understanding of the mind.
Why Join Us?
🍎 Cutting-edge research: Tackle challenges in neuro-inspired AI, computational psychiatry, brain-computer interfaces, extended reality (XR), social media, wearables, and beyond.
🍎 Interdisciplinary impact: Work at the intersection of advanced neuroscience, machine learning, and cognitive science to create adaptive AI systems, new tools for mental health, and next-gen neurotechnology.
🍎 Top-tier environment: Join Mount Sinai’s dynamic research community, within New York’s largest health system with the most diverse patient populations and a leading hub for AI in healthcare (ranked #1 by Nature).
Whether you're a potential postdoc, PhD student, or someone looking for an interdisciplinary research experience, if you’re passionate about bridging the gap between bytes and minds, we want to hear from you! Learn more at linlab.org and apply by emailing me at baihan.lin@mssm.edu.
Bytes of Minds Lab (Lin Lab), Departments of AI, Psychiatry, and Neuroscience; Hasso Plattner Institute for Digital Health, Friedman Brain Institute, Center for Computational Psychiatry
Sarath Chandar
At Chandar Lab and Mila, the Quebec AI Institute, we have four open postdoctoral positions on the following topics:
1. Postdoc position on large language models (LLMs) - Topics of interest include but are not limited to better pre-training methods, better fine-tuning methods, bias and fairness, interpretability, safety and alignment, and continual pre-training.
2. Postdoc position on foundation models for biological data - Topics of interest include but are not limited to foundation models (both encoder and decoder models) for proteins, small molecules, and genomics data; multi-modal foundation models for biological data; 3D generative modelling; and drug discovery. For this position, we are looking for a candidate with a strong ML/LLM/Transformers/foundation-models background. If you do not have a biological domain background but are interested in exploring AI for science, this is a perfect position for you. In this position, you will work closely with our pharmaceutical partners, who are experts in biology.
3. Postdoc position on foundation models for time series data - Topics of interest include but are not limited to better sequential architectures, state space models, recurrent neural networks, and attention-free architectures.
4. Postdoc position on foundation models for astrophysics - Topics of interest include but are not limited to recurrent inference machines, Transformers, and diffusion models for radio images in astrophysics.
Prof. Gilles Louppe
PhD and postdoctoral opportunities are now available at the Science with AI Lab (SAIL) of the Montefiore Institute, University of Liège, for researchers interested in advancing deep learning and foundation models for scientific applications. Open positions include: a PhD/postdoc position in deep learning for scientific foundation models (Mosaic project), a PhD/postdoc position in foundation models for solving inverse problems (Mosaic project), a PhD/postdoc position on the emulation of regional climate models (MAR.ai project, in collaboration with Prof. Xavier Fettweis). These positions focus on methodological advances in deep learning, with AI for weather as a key application area. The positions offer competitive salaries, access to state-of-the-art GPU clusters, and the opportunity to publish in top-tier AI venues while making real-world impact. The positions are fully funded and available immediately (Mosaic project) or starting October 1, 2025 (MAR.ai project). The duration is 4 years for PhD candidates and 2 years for postdoctoral researchers.
Llama 3.1 Paper: The Llama 3 Herd of Models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
Foundation models in ophthalmology
Abstract to follow.
Unified C. elegans Neural Activity and Connectivity Datasets for Building Foundation Models of a Small Nervous System
Bernstein Conference 2024