Computer Science
Events
Live and recorded talks from the researchers shaping this domain.
FLUXSynID: High-Resolution Synthetic Face Generation for Document and Live Capture Images
Synthetic face datasets are increasingly used to overcome the limitations of real-world biometric data, including privacy concerns, demographic imbalance, and high collection costs. However, many existing methods lack fine-grained control over identity attributes and fail to produce paired, identity-consistent images under structured capture conditions. In this talk, I will present FLUXSynID, a framework for generating high-resolution synthetic face datasets with user-defined identity attribute distributions and paired document-style and trusted live capture images. The dataset generated using FLUXSynID shows improved alignment with real-world identity distributions and greater diversity compared to prior work. I will also discuss how FLUXSynID’s dataset and generation tools can support research in face recognition and morphing attack detection (MAD), enhancing model robustness in both academic and practical applications.
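The attribute control described here can be pictured as sampling each identity from user-defined categorical distributions before any image is generated. A minimal sketch (the attribute names and weights are illustrative, not FLUXSynID's actual configuration schema):

```python
import random

# Illustrative, user-editable attribute distributions (not FLUXSynID's schema).
ATTRIBUTE_DISTRIBUTIONS = {
    "age_group": {"18-30": 0.4, "31-50": 0.4, "51-70": 0.2},
    "gender":    {"female": 0.5, "male": 0.5},
    "eyewear":   {"none": 0.8, "glasses": 0.2},
}

def sample_identity(rng: random.Random) -> dict:
    """Draw one identity's attributes from the configured distributions."""
    return {
        attr: rng.choices(list(dist), weights=list(dist.values()))[0]
        for attr, dist in ATTRIBUTE_DISTRIBUTIONS.items()
    }

rng = random.Random(42)
for _ in range(3):
    print(sample_identity(rng))
```

Each sampled identity would then condition the generation of one document-style image and one trusted live capture of the same face.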
Speaker
Raul Ismayilov • University of Twente
Scheduled for
Jul 1, 2025, 2:00 PM
Timezone
GMT+1
How Generative AI is Revolutionizing the Software Developer Industry
Generative AI is fundamentally transforming the software development industry by improving software testing, bug detection and repair, and developer productivity. This talk explores how AI-driven techniques, particularly large language models (LLMs), are being used to generate realistic test scenarios, automate bug detection and repair, and streamline development workflows. As these technologies evolve, they promise to significantly improve software quality and efficiency. The discussion will cover key methodologies, challenges, and the future impact of generative AI on the software development lifecycle, offering a comprehensive overview of its revolutionary potential in the industry.
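As a concrete flavor of the workflows discussed here, the sketch below asks a chat model to draft pytest cases for a function. It uses the OpenAI Python client, but any chat-completion API would do; the model name and prompts are placeholders, and generated tests should always be reviewed by a developer before use.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_tests(source_code: str) -> str:
    """Ask a chat model to propose pytest cases for the given code."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You write concise pytest unit tests, covering edge cases."},
            {"role": "user",
             "content": f"Write pytest tests for this function:\n\n{source_code}"},
        ],
    )
    return response.choices[0].message.content

print(generate_tests("def add(a, b):\n    return a + b"))
```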
Speaker
Luca Di Grazia • Università della Svizzera Italiana
Scheduled for
Sep 30, 2024, 12:00 PM
Timezone
GMT+1
Llama 3.1 Paper: The Llama Family of Models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
Speaker
Vibhu Sapra
Scheduled for
Jul 28, 2024, 10:00 AM
Timezone
GMT+2
Error Consistency between Humans and Machines as a function of presentation duration
Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: while DNNs have proven capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3 ms to 266 ms and measure human performance and reaction times on natural, low-pass-filtered, and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6 ms, can still solve our 8-way classification task with success rates well above chance. We also find that human-to-human error consistency is already stable at 16.6 ms.
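For reference, error consistency (Geirhos et al., 2020) is Cohen's kappa computed over the trial-by-trial agreement of two observers, correcting for the agreement their accuracies alone would already produce. A minimal implementation:

```python
import numpy as np

def error_consistency(correct_a: np.ndarray, correct_b: np.ndarray) -> float:
    """Cohen's kappa over trial-level agreement of two observers."""
    correct_a = correct_a.astype(bool)
    correct_b = correct_b.astype(bool)
    c_obs = np.mean(correct_a == correct_b)        # observed agreement
    p_a, p_b = correct_a.mean(), correct_b.mean()  # the two accuracies
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)      # agreement expected by chance
    return (c_obs - c_exp) / (1 - c_exp)

rng = np.random.default_rng(0)
human = rng.random(500) < 0.80  # simulated 80%-accurate human observer
dnn   = rng.random(500) < 0.75  # simulated 75%-accurate DNN
print(error_consistency(human, dnn))  # near 0: errors fall on independent trials
```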
Speaker
Thomas Klein • Eberhard Karls Universität Tübingen
Scheduled for
Jun 30, 2024, 10:30 AM
Timezone
GMT+1
A modular, free and open source graphical interface for visualizing and processing electrophysiological signals in real-time
Portable biosensors are becoming more popular every year. In this context, I propose NeuriGUI, a modular and cross-platform graphical interface that connects to these biosensors for real-time processing, exploration, and storage of electrophysiological signals. NeuriGUI acts as a common entry point in brain-computer interfaces, making it possible to plug in downstream third-party applications for real-time analysis of the incoming signal. NeuriGUI is 100% free and open source.
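Downstream applications of the kind described here often consume signals over Lab Streaming Layer (LSL), a common transport for EEG biosensors; whether NeuriGUI republishes over LSL specifically is an assumption in this sketch. A third-party consumer can then be as small as:

```python
from pylsl import StreamInlet, resolve_stream

streams = resolve_stream("type", "EEG")      # block until an EEG stream appears
inlet = StreamInlet(streams[0])

for _ in range(1000):
    sample, timestamp = inlet.pull_sample()  # one multichannel sample + LSL clock
    print(timestamp, sample)
```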
Speaker
David Baum • Research Engineer at InteraXon
Scheduled for
May 27, 2024, 12:00 PM
Timezone
GMT-3
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
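To make the first part concrete: imitation learning with a diffusion model amounts to training a network to denoise expert action sequences conditioned on observations, then sampling the reverse process at play time. A deliberately tiny sketch (architecture, dimensions, and the linear noising schedule are all illustrative, not the team's actual models):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisePredictor(nn.Module):
    """Predicts the noise added to an action vector, given obs and time t."""
    def __init__(self, obs_dim=32, act_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, noisy_action, t):
        return self.net(torch.cat([obs, noisy_action, t], dim=-1))

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

obs = torch.randn(64, 32)              # observations from gameplay logs
action = torch.randn(64, 8)            # expert actions to imitate
t = torch.rand(64, 1)                  # diffusion time in [0, 1]
noise = torch.randn_like(action)
noisy = (1 - t) * action + t * noise   # simple linear noising schedule

loss = F.mse_loss(model(obs, noisy, t), noise)
loss.backward()
opt.step()
print(float(loss))  # sampling then runs the learned denoiser in reverse
```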
Speaker
Katja Hofmann • Microsoft Research
Scheduled for
May 21, 2024, 2:00 PM
Timezone
GMT
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
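The first approach can be caricatured in a few lines: the connectome fixes who connects to whom (and how strongly, via synapse counts and cell-type signs), leaving only a handful of free parameters to fit to recorded activity. A toy version, not the actual model of [1]:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Fixed connectivity: sparse synapse counts, with an E/I sign per presynaptic cell.
synapse_counts = rng.poisson(2.0, size=(n, n)) * (rng.random((n, n)) < 0.1)
signs = np.where(rng.random(n) < 0.7, 1.0, -1.0)
W = synapse_counts * signs[None, :]

def simulate(gain: float, tau: float, inp: np.ndarray, steps=200, dt=0.01):
    """Leaky rate dynamics: tau * dr/dt = -r + relu(gain * W @ r + input)."""
    r = np.zeros(n)
    for _ in range(steps):
        r += dt / tau * (-r + np.maximum(0.0, gain * W @ r + inp))
    return r

rates = simulate(gain=0.05, tau=0.1, inp=rng.random(n))
print(rates[:5])  # gain/tau would be fit so these match recorded responses
```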
Speaker
Srinivas Turaga • HHMI | Janelia
Scheduled for
May 14, 2024, 2:00 PM
Timezone
GMT
Generative models for video games
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Speaker
Katja Hofmann • Microsoft Research
Scheduled for
Apr 30, 2024, 2:00 PM
Timezone
GMT
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; no author of the paper will be joining us.
Title: Brain decoding: toward real-time reconstruction of visual perception
Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold: First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the real-time decoding of the visual processes continuously unfolding within the human brain.
Speaker: Dr. Paul Scotti (Stability AI, MedARC)
Paper link: https://arxiv.org/abs/2310.19812
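The contrastive half of the training objective is a standard symmetric InfoNCE (CLIP-style) loss between MEG-window embeddings and frozen image embeddings. A sketch with stand-in encoders (the paper's actual modules are far larger):

```python
import torch
import torch.nn.functional as F

def clip_loss(meg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired MEG/image embeddings."""
    meg = F.normalize(meg_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)
    logits = meg @ img.T / temperature   # (B, B) similarity matrix
    targets = torch.arange(len(logits))  # matching pairs sit on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

meg_emb = torch.randn(32, 256, requires_grad=True)  # from the MEG module
img_emb = torch.randn(32, 256)                      # e.g. frozen DINOv2 features
print(float(clip_loss(meg_emb, img_emb)))
```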
Speaker
Paul Scotti
Scheduled for
Dec 6, 2023, 11:00 AM
Timezone
EST
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).
Title: SwiFT: Swin 4D fMRI Transformer
Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process four-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.
Speaker: Junbeom Kwon, a research associate in Prof. Jiook Cha's lab at Seoul National University.
Paper link: https://arxiv.org/abs/2307.05916
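The core architectural move, 4D windowed attention, can be illustrated by how a (height, width, depth, time) fMRI tensor is partitioned into small 4D windows before self-attention runs within each window. A sketch with illustrative dimensions, not SwiFT's exact implementation:

```python
import torch

def window_partition_4d(x, ws):
    """x: (B, H, W, D, T, C) -> (num_windows * B, ws**4 tokens, C)."""
    B, H, W, D, T, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, D // ws, ws, T // ws, ws, C)
    x = x.permute(0, 1, 3, 5, 7, 2, 4, 6, 8, 9).contiguous()
    return x.view(-1, ws ** 4, C)

fmri = torch.randn(1, 8, 8, 8, 8, 32)    # (B, H, W, D, T, C)
windows = window_partition_4d(fmri, ws=4)
print(windows.shape)                      # torch.Size([16, 256, 32])

attn = torch.nn.MultiheadAttention(32, 4, batch_first=True)
out, _ = attn(windows, windows, windows)  # attention within each 4D window
```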
Speaker
Junbeom Kwon
Scheduled for
Nov 20, 2023, 8:30 AM
Timezone
EST
The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures
What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
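Schematically (notation mine, simplified from the framework described above), the gated network computes a gate-dependent sum over linear pathways, so the map is linear conditional on each gating pattern yet nonlinear overall:

```latex
% Gates g_p(x) in {0,1} select which pathways are active for input x.
\[
  f(x) \;=\; \sum_{p \in \mathcal{P}} g_p(x) \Big( \prod_{l \in p} W_l \Big) x
\]
```

Gradient descent on the pathway weights then behaves like a race between pathways, which is where the reduction gets its name.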
Speaker
Andrew Saxe • UCL
Scheduled for
Apr 13, 2023, 12:30 PM
Timezone
EST
Understanding and Mitigating Bias in Human & Machine Face Recognition
With the increasing use of automated face recognition (AFR) technologies, it is important to consider whether these systems not only perform accurately, but also equitably, or without “bias”. Despite rising public, media, and scientific attention to this issue, the sources of bias in AFR are not fully understood. This talk will explore how human cognitive biases may impact our assessments of performance differentials in AFR systems and our subsequent use of those systems to make decisions. We’ll also show how, if we adjust our definition of what a “biased” AFR algorithm looks like, we may be able to create algorithms that optimize the performance of a human+algorithm team, not simply the algorithm itself.
Speaker
John Howard • Maryland Test Facility
Scheduled for
Apr 11, 2023, 4:00 PM
Timezone
GMT+1
Automated generation of face stimuli: Alignment, features and face spaces
I describe a well-tested Python module that performs automated alignment and warping of face images, and some advantages it has over existing solutions. An additional tool I’ve developed does automated extraction of facial features, which can be used in a number of interesting ways. I illustrate the value of wavelet-based features with a brief description of two recent studies: perceptual in-painting, and the robustness of the whole-part advantage across a large stimulus set. Finally, I discuss the suitability of various deep learning models for generating stimuli to study perceptual face spaces. I believe those interested in the forensic aspects of face perception may find this talk useful.
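As an illustration of what automated alignment involves (a generic sketch, not the module presented in the talk): estimate a similarity transform mapping detected eye landmarks to canonical positions, then warp.

```python
import cv2
import numpy as np

def align_face(image, eye_left, eye_right, size=256):
    """Warp image so the eyes land at fixed canonical coordinates."""
    src = np.float32([eye_left, eye_right])
    dst = np.float32([[0.35 * size, 0.4 * size], [0.65 * size, 0.4 * size]])
    M, _ = cv2.estimateAffinePartial2D(src, dst)  # rotation + scale + translation
    return cv2.warpAffine(image, M, (size, size))

img = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a face photo
aligned = align_face(img, eye_left=(250, 220), eye_right=(390, 215))
print(aligned.shape)  # (256, 256, 3)
```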
Speaker
Carl Gaspar • Zayed University (UAE)
Scheduled for
Jan 31, 2023, 2:00 PM
Timezone
GMT+1
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
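A condensed sketch of the event-based GRU idea, with details simplified: a standard GRU update, but a unit only emits output when its state crosses a learned threshold, so downstream computation sees sparse events (the real EGRU also needs a surrogate gradient through the threshold to train).

```python
import torch
import torch.nn as nn

class EGRUCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gru = nn.GRUCell(input_size, hidden_size)
        self.threshold = nn.Parameter(torch.ones(hidden_size))

    def forward(self, x, h):
        c = self.gru(x, h)                    # dense GRU state update
        event = (c > self.threshold).float()  # units firing this step
        y = event * c                         # sparse, event-based output
        h_next = c - event * self.threshold   # soft reset where an event fired
        return y, h_next

cell = EGRUCell(16, 64)
h = torch.zeros(1, 64)
for _ in range(10):
    y, h = cell(torch.randn(1, 16), h)
    print(float((y != 0).float().mean()))  # fraction of units emitting events
```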
Speaker
Anand Subramoney • Ruhr University Bochum
Scheduled for
Nov 8, 2022, 4:50 PM
Timezone
GMT+1
Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks
Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
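Per the Brian2CUDA documentation, moving an existing Brian 2 model to the GPU requires only importing the package and selecting its device; everything after the first two lines below is ordinary Brian 2 code (the model here is a generic toy, not one of the paper's benchmarks):

```python
from brian2 import *
import brian2cuda
set_device("cuda_standalone")  # code generation now targets CUDA

# An ordinary Brian 2 network: leaky integrators with sparse synapses.
G = NeuronGroup(100_000, "dv/dt = -v / (10*ms) : 1",
                threshold="v > 1", reset="v = 0", method="exact")
G.v = "rand()"
S = Synapses(G, G, on_pre="v_post += 0.01")
S.connect(p=0.001)
run(1 * second)
```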
Speaker
Denis Alevi • Berlin Institute of Technology (TU Berlin)
Scheduled for
Nov 2, 2022, 4:00 PM
Timezone
GMT
Lifelong Learning AI via neuro-inspired solutions
AI embedded in real systems, such as satellites, robots, and other autonomous devices, must make fast, safe decisions even when the environment changes or the available power is limited; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity: the AI must be trained in advance, typically on a large dataset and with much computational power; once fielded, the AI is frozen. It is unable to use its experience to operate if the environment proves to be outside its training, or to improve its expertise; worse, since datasets cannot cover all possible real-world situations, systems with such frozen intelligent control are likely to fail. Lifelong Learning is the cutting edge of artificial intelligence, encompassing computational methods that allow systems to learn in runtime and incorporate that learning for application in new, unanticipated situations. Until recently, this sort of computation was found exclusively in nature; thus, Lifelong Learning looks to nature, and in particular neuroscience, for its underlying principles and mechanisms and then translates them to this new technology. Our presentation will introduce a number of state-of-the-art approaches to achieving AI adaptive learning, including from DARPA's L2M program and subsequent developments. Many environments are affected by temporal changes, such as the time of day, week, or season. One way to create adaptive systems that are both small and robust is to make them aware of time and able to comprehend temporal patterns in the environment. We will describe our current research in temporal AI, while also considering power constraints.
Speaker
Hava Siegelmann • University of Massachusetts Amherst
Scheduled for
Oct 26, 2022, 3:00 PM
Timezone
GMT
Learning Relational Rules from Rewards
Humans perceive the world in terms of objects and relations between them. In fact, for any given pair of objects, there is a myriad of relations that apply to them. How does the cognitive system learn which relations are useful to characterize the task at hand? And how can it use these representations to build a relational policy to interact effectively with the environment? In this paper, we propose that this problem can be understood through the lens of a sub-field of symbolic machine learning called relational reinforcement learning (RRL). To demonstrate the potential of our approach, we build a simple model of relational policy learning based on a function approximator developed in RRL. We trained and tested our model in three Atari games that require considering an increasing number of potential relations: Breakout, Pong, and Demon Attack. In each game, our model was able to select adequate relational representations and build a relational policy incrementally. We discuss the relationship between our model and models of relational and analogical reasoning, as well as its limitations and future directions of research.
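A toy rendition of this setup: describe each frame by the truth values of candidate relations over object pairs, and learn a (here linear) value function over those relational features, letting reward determine which relations earn weight. Relation definitions and coordinates are illustrative:

```python
import numpy as np

RELATIONS = {
    "left_of": lambda a, b: a[0] < b[0],
    "above":   lambda a, b: a[1] < b[1],
    "near":    lambda a, b: np.hypot(a[0] - b[0], a[1] - b[1]) < 20,
}

def relational_features(objects):
    """Binary vector with one entry per (relation, ordered object pair)."""
    feats = []
    for i, a in enumerate(objects):
        for j, b in enumerate(objects):
            if i != j:
                feats.extend(float(rel(a, b)) for rel in RELATIONS.values())
    return np.array(feats)

# e.g. a Breakout-like state: (paddle, ball) screen coordinates
phi = relational_features([(80, 200), (90, 120)])
w = np.zeros_like(phi)   # linear Q-weights, updated from reward by TD learning
print(phi, w @ phi)
```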
Speaker
Guillermo Puebla • University of Bristol
Scheduled for
Oct 12, 2022, 11:00 AM
Timezone
CST
General purpose event-based architectures for deep learning
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
Speaker
Anand Subramoney • Institute for Neural Computation
Scheduled for
Oct 4, 2022, 3:00 PM
Timezone
GMT+1
Introducing dendritic computations to SNNs with Dendrify
Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Dendrify, through simple commands, can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
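Usage follows a build-compartments-then-couple pattern. The sketch below follows the Dendrify documentation as best I recall it; exact argument names may differ between versions, so treat it as illustrative rather than canonical:

```python
from brian2 import um, mV, nS
from dendrify import Soma, Dendrite, NeuronModel

# Two compartments: a leaky integrate-and-fire soma and one dendrite
# equipped with simplified dendritic (Na-type) spiking.
soma = Soma("soma", model="leakyIF", length=25 * um, diameter=25 * um)
dend = Dendrite("dend", length=50 * um, diameter=2 * um)
dend.dspikes("Na", threshold=-35 * mV, g_rise=15 * nS, g_fall=12 * nS)

# Couple the compartments with an axial conductance; Dendrify then emits
# Brian2-compatible equations for the reduced compartmental model.
model = NeuronModel([(soma, dend, 15 * nS)], v_rest=-65 * mV)
print(model.equations)  # plug these into a brian2 NeuronGroup
```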
Speaker
Michalis Pagkalos • IMBB FORTH
Scheduled for
Sep 6, 2022, 3:00 PM
Timezone
GMT+1
Computational Imaging: Augmenting Optics with Algorithms for Biomedical Microscopy and Neural Imaging
Computational imaging seeks to achieve novel capabilities and overcome conventional limitations by combining optics and algorithms. In this seminar, I will discuss two computational imaging technologies developed in the Boston University Computational Imaging Systems lab: Intensity Diffraction Tomography and the Computational Miniature Mesoscope. In our intensity diffraction tomography system, we demonstrate 3D quantitative phase imaging on a simple LED array microscope. We develop both single-scattering and multiple-scattering models to image complex biological samples. In our Computational Miniature Mesoscope, we demonstrate single-shot 3D high-resolution fluorescence imaging across a wide field-of-view in a miniaturized platform. We develop methods to characterize 3D spatially varying aberrations and physical simulator-based deep learning strategies to achieve fast and accurate reconstructions. Broadly, I will discuss how synergies between novel optical instrumentation, physical modeling, and model- and learning-based computational algorithms can push the limits in biomedical microscopy and neural imaging.
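A generic wave-optics building block of such forward models is angular-spectrum propagation of a complex field, shown below (a textbook sketch, not the lab's code):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a square 2D complex field by distance z (lengths in meters)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequency grid
    FX, FY = np.meshgrid(fx, fx)
    arg = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    mask = arg > 0                                # keep propagating components
    kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    H = np.where(mask, np.exp(1j * kz * z), 0.0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

field = np.ones((256, 256), dtype=complex)        # plane wave leaving a sample
out = angular_spectrum(field, wavelength=532e-9, dx=1e-6, z=10e-6)
print(np.abs(out).mean())
```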
Speaker
Lei Tian • Department of Electrical and Computer Engineering, Boston University
Scheduled for
Aug 21, 2022, 11:00 AM
Timezone
GMT-3
Faculty, staff, and research positions available across the World Wide network.
Uri
The new lab at UCSD, directed by Uri, is opening positions for machine learning/computer vision scientists. The lab is part of a new “Technology Sandbox” at UCSD, which includes a ThermoFisher cryoEM and mass spec center, a Nikon Imaging Center, and computational resources.
Location
UCSD
Apply by
Sep 26, 2025
Posted on
Nov 16, 2025
Prof. Dr.-Ing. Marcus Magnor
The job is a W3 Full Professorship for Artificial Intelligence in Interactive Systems at Technische Universität Braunschweig. The role involves expanding the research area of data-driven methods for interactive and intelligent systems at TU Braunschweig and strengthening the focal points 'Data Science' and 'Reliability' of the Department of Computer Science. The position holder is expected to have a strong background in Computer Science with a focus on Artificial Intelligence/Machine Learning, specifically in the areas of Dependable AI and Explainable AI. The role also involves teaching topic-related courses in the areas of Artificial Intelligence and Machine Learning to complement the Bachelor's and Master's degree programs of the Department of Computer Science.
Location
Technische Universität Braunschweig, Germany
Apply by
Aug 31, 2024
Posted on
Nov 16, 2025
Justus Piater, Antonio Rodríguez-Sánchez, Samuele Tosatto
This is a university doctoral position that involves minor teaching duties. The precise research topics are negotiable within the scope of active research at IIS, including machine learning and growing levels of AI for computer vision and robotics. Of particular interest are topics in representation learning and causality for out-of-distribution situations.
Location
University of Innsbruck, Austria
Apply by
Jul 27, 2024
Posted on
Nov 16, 2025
Prof. Dr.-Ing. Marcus Magnor
The position holder is expected to have a strong background in Computer Science with a focus on Artificial Intelligence/Machine Learning, specifically in the areas of Dependable AI and Explainable AI. Applicants should possess a method-oriented research focus on machine learning and have made internationally recognized contributions to at least one of the current research areas such as neural networks, generative and adversarial models, online and transfer learning, federated learning, (deep) reinforcement learning, probabilistic inference, graphical models, and/or MDP/POMDP. A researcher is sought who is able to combine the theoretical-methodological investigation and development of learning methods with applications in interactive intelligent systems, for example in autonomous robots, intelligent virtual agents, or intelligent networked production systems. Suitable applicants are expected to show an active interest in the concrete implementation of cognitive abilities in technical systems, ensuring compatibility with partners in engineering and natural sciences. With his/her research performance, the position holder will enhance the international visibility of TU Braunschweig in the field of Artificial Intelligence. In teaching, topic-related courses in the areas of Artificial Intelligence and Machine Learning shall complement the Bachelor's and Master's degree programs of the Department of Computer Science. In particular, the topic of Machine Learning/Artificial Intelligence is to be anchored in undergraduate teaching with a new compulsory Bachelor course. Participation in the academic self-administration of the university is expected, as is the willingness to actively shape computer science at TU Braunschweig.
Location
Technische Universität Braunschweig, Germany
Apply by
Aug 31, 2024
Posted on
Nov 16, 2025
Prof. Dr.-Ing. Marcus Magnor
The Technische Universität Braunschweig is offering a W3 Full Professorship for Artificial Intelligence in Interactive Systems. The position holder is expected to have a strong background in Computer Science with a focus on Artificial Intelligence/Machine Learning, specifically in the areas of Dependable AI and Explainable AI. The researcher is expected to combine the theoretical-methodological investigation and development of learning methods with applications in interactive intelligent systems. In teaching, topic-related courses in the areas of Artificial Intelligence and Machine Learning shall complement the Bachelor's and Master's degree programs of the Department of Computer Science. Participation in the academic self-administration of the university is expected, as is the willingness to actively shape computer science at TU Braunschweig.
Location
Technische Universität Braunschweig, Germany
Apply by
Aug 31, 2024
Posted on
Nov 16, 2025
Dr. D.M. Lyons
Fordham University (New York City) has developed a unique Ph.D. program in Computer Science, tuned to the latest demands and opportunities of the field. Upon completion of the Ph.D. in Computer Science program, students will be able to demonstrate the fundamental, analytical, and computational knowledge and methodology needed to conduct original research and practical experiments in the foundations and theory of computer science, in software and systems, and in informatics and data analytics. They will also be able to apply computing and informatics methods and techniques to understand, analyze, and solve a variety of significant, real-world problems and issues in the cyber, physical, and social domains. Furthermore, they will be able to conduct original, high-quality, ethically informed scientific research and publish in respected, peer-reviewed journals and conferences. Lastly, they will be able to effectively instruct others in a variety of topics in Computer Science at the university level, addressing ethics, justice, diversity, and sustainability. This training and education means that graduates can pursue not only careers at the university level, but also research and leadership positions in industry, in government, and within leading technology companies. A hallmark of the program is early involvement in research, within the first two years of the program. Students will have the opportunity to carry out research in machine learning and AI/robotics, big data analytics and informatics, data and information fusion, information and cyber security, and software engineering and systems.
Location
New York City
Apply by
Jan 9, 2024
Posted on
Nov 16, 2025
N/A
The Faculty of Computer Science of HSE University invites applications for full-time, tenure-track positions of Assistant Professor in all areas of computer science, including but not limited to artificial intelligence, machine learning, computer vision, programming language theory, software engineering, system programming, algorithms, computational complexity, distributed and parallel computation, bioinformatics, human-computer interaction, and robotics. The successful candidate is expected to conduct high-quality research publishable in reputable peer-reviewed journals, with research support provided by the University.
Location
Moscow, Russia
Apply by
Jan 15, 2024
Posted on
Nov 16, 2025
N/A
The post holder will support the delivery of key areas of teaching and research in the discipline of Computing. They will contribute to the delivery of undergraduate and postgraduate degree programmes offered by the School of Computing, Engineering and Intelligent Systems, and undertake research in areas aligned with the Intelligent Systems Research Centre (ISRC).
Location
Cromore Road, Coleraine, Co. Londonderry BT52 1SA
Apply by
Sep 4, 2024
Posted on
Nov 16, 2025
Joël Ouaknine
The successful candidate will work in close collaboration with academic and industrial partners, delving deep into the verification of software programs based on Large Language Models (LLMs). The focus of the position includes designing and implementing innovative verification methods to ensure the reliability and accuracy of LLM-based software programs, and actively engaging in the design and development of a system that generates high-quality data utilising LLMs. The project involves establishing methods to validate the efficacy of the prompting processes in obtaining accurate responses from LLMs, and developing strategies to verify the overall reliability of the LLM-based software program. The postdoctoral researcher will further refine this approach, aiding in the development of a system optimised for high-quality data generation using LLMs. The successful candidate is expected to spend one or more internships in industry and liaise with industrial partners.
Location
Saarbrücken, Germany
Apply by
Oct 22, 2024
Posted on
Nov 16, 2025
Dr James Stovold
Two new Assistant Professor positions are available at Lancaster University Leipzig: one with a focus on data science, one focussed on cyber security. Lancaster University Leipzig is a young branch campus of Lancaster University, based in Leipzig, Germany. The department is focussed on intelligent systems, including topics such as emergent behaviour, unconventional computing, quantum software engineering, and robotics. These are grade 8 openings, so we would expect to see a research plan and teaching experience from interested applicants (grade 7 is the typical entry-level assistant professor grade).
Location
Leipzig, Germany
Apply by
Sep 26, 2025
Posted on
Nov 16, 2025
Recurring lecture series, cohorts, and thematic programming.
AFC Lab & CARLA Talk Series
Next activity
Nov 16, 2025
Organised by
World Wide Network
Sussex Visions
Next activity
Nov 16, 2025
Organised by
World Wide Network
LIBRE hub Seminar
Next activity
Nov 16, 2025
Organised by
World Wide Network