
Image Analysis

Topic spotlight: image analysis · World Wide

Discover seminars, jobs, and research tagged with image analysis across World Wide.
13 curated items: 8 Seminars · 4 Positions · 1 ePoster
Updated 2 days ago

Position

Dr Adam Tyson

Sainsbury Wellcome Centre, UCL
London, U.K.
Dec 5, 2025

We are recruiting a Research Software Engineer to work between the Sainsbury Wellcome Centre (SWC) Neuroinformatics Unit (NIU) and Advanced Microscopy Facility (AMF) to provide support for the analysis of light microscopy datasets. The AMF supports all light microscopy at SWC, including both in vivo functional imaging within research laboratories and histological imaging performed within the facility itself. The facility provides histology and tissue clearing as a service, along with custom-built microscopes (lightsheet and serial-section two-photon) for whole-brain imaging. The NIU (https://neuroinformatics.dev) is a Research Software Engineering team dedicated to the development of high-quality, accurate, robust, easy-to-use and maintainable open-source software for neuroscience and machine learning. We collaborate with researchers and other software engineers to advance research in the two research centres and to make new algorithms and tools available to the global community. The NIU leads the development of the BrainGlobe (https://brainglobe.info/) computational neuroanatomy initiative. The aim of this position is to work in both the NIU and the AMF to develop data processing and analysis software and to advise researchers on how best to analyse their data. The postholder will be embedded within both teams to help optimise all steps in the software development process.
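
For context on the BrainGlobe tooling mentioned in the advert, the snippet below shows the sort of atlas query the BrainGlobe ecosystem supports. It is an illustrative sketch based on the publicly documented bg-atlasapi package, not part of the job description, and the package, atlas, and method names may differ between versions.

```python
# Illustrative sketch only: querying a reference atlas via BrainGlobe's atlas API.
# Package/atlas/method names follow public documentation and may vary by version.
from bg_atlasapi import BrainGlobeAtlas

atlas = BrainGlobeAtlas("allen_mouse_25um")  # downloaded and cached on first use

# Reference image and annotation volume are numpy arrays with matching shapes.
print(atlas.reference.shape, atlas.annotation.shape)

# Map a voxel coordinate to the acronym of the brain region that contains it.
print(atlas.structure_from_coords((200, 150, 250), as_acronym=True))
```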

Position

Mick Hastings (Greg Jefferis)

MRC Laboratory of Molecular Biology, Cambridge
Cambridge, UK
Dec 5, 2025

Group leader (tenure-track/tenured) in Neurobiology, with an emphasis on synaptic-resolution imaging techniques to study the structure and function of intact nervous systems. The MRC LMB is a research institute with stable core funding for staff, students, and equipment in support of ambitious long-term research programmes. Research is the focus and the administrative load is low. The LMB is co-located with the University of Cambridge, a world leader in the natural sciences; teaching is possible but never required. This position is part of a new initiative in Molecular Connectomics supported by the MRC and LMB leadership within the Division of Neurobiology.

Position · Artificial Intelligence

Ekta Vats

Beijer Laboratory for Artificial Intelligence Research
Uppsala University, Sweden
Dec 5, 2025

A fully funded PhD position in Machine Learning and Computer Vision is available at Uppsala University, Sweden. The position is part of the Beijer Laboratory for Artificial Intelligence Research, funded by the Kjell and Märta Beijer Foundation. In this project, you will join us in conducting fundamental machine learning research and developing principled foundations of vision-language models, with opportunities to validate the methods on challenging real-world problems in computer vision.

Position

Ekta Vats

Department of Information Technology, Division of Systems and Control, Beijer Laboratory for Artificial Intelligence Research
Uppsala University, Sweden
Dec 5, 2025

We announce a fully funded two-year postdoctoral researcher position in Multimodal Deep Learning at Uppsala University, Sweden. Multimodal vision-language models integrate computer vision and natural language processing techniques to process and generate information that combines visual and textual modalities, enabling a deeper understanding of the content of images and videos. Vision-language models exhibit promising potential, and there are several important research challenges to explore. Effectively integrating the two modalities (vision and language) and aligning visual and text embeddings in a cohesive embedding space continue to pose significant challenges. In this project, the successful candidate will conduct fundamental research and methods development towards designing efficient multimodal models and exploring their applications in computer vision. We are looking for a candidate with a deep learning background and an interest in working on vision-language modeling. The application areas of interest will be decided in a dialogue between the candidate and the supervisor, taking into account the candidate's interests and research proposal. The position also offers teaching possibilities of up to 20%, in English or Swedish. The selected candidate will work in the Department of Information Technology, Division of Systems and Control, in Ekta Vats’ group and the Beijer Laboratory for Artificial Intelligence Research. The project offers a rich collaborative environment (spanning theoretical ML research together with partners at the SciML group), and participation in leading CV/ML conferences (ICML, NeurIPS, CVPR, ICCV, etc.) is expected.
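
The embedding-alignment challenge mentioned above is commonly addressed with a contrastive objective in the style of CLIP, which pulls matching image-text pairs together and pushes mismatched pairs apart. The sketch below is a minimal, generic PyTorch illustration of that idea, not code from this project; the embeddings, batch size, and temperature are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb: torch.Tensor,
                    text_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) outputs of two separate encoders
    (placeholders here; any vision and text backbone could produce them).
    """
    # Project both modalities onto the unit sphere so the dot product
    # becomes a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; entry [i, j] compares image i with text j.
    logits = image_emb @ text_emb.t() / temperature

    # The matching pair for each row/column sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image-to-text and text-to-image).
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random tensors standing in for encoder outputs.
imgs = torch.randn(8, 512)
txts = torch.randn(8, 512)
print(clip_style_loss(imgs, txts).item())
```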

Seminar · Neuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience and Hertie Institute for Clinical Brain Research, University of Tübingen
Feb 11, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
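
As a rough, illustrative sketch of the kind of trial-by-trial analysis described in this abstract (not the authors' actual pipeline), one can correlate a neuron's single-trial visual response latency with the saccadic reaction time on the same trials; the numbers below are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated single-neuron data: on each trial we record a visual response
# latency (ms) and the saccadic reaction time (ms). Values are fabricated
# purely to show the computation.
n_trials = 200
visual_latency = rng.normal(45, 5, n_trials)                       # ms after stimulus onset
saccade_rt = 120 + 1.5 * visual_latency + rng.normal(0, 15, n_trials)

# Trial-by-trial (Pearson) correlation: earlier visual responses should
# co-occur with faster saccades if the neuron's response contributes to
# saccade timing.
r, p = stats.pearsonr(visual_latency, saccade_rt)
print(f"latency vs. reaction time: r = {r:.2f}, p = {p:.1e}")
```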

Seminar · Artificial Intelligence · Recording

Foundation models in ophthalmology

Pearse Keane
University College London and Moorfields Eye Hospital NHS Foundation Trust
Sep 5, 2023

Abstract to follow.

Seminar · Artificial Intelligence · Recording

Diverse applications of artificial intelligence and mathematical approaches in ophthalmology

Tiarnán Keenan
National Eye Institute (NEI)
Jun 5, 2023

Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.
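
Cluster analysis of the kind mentioned here is typically applied to a table of per-patient imaging features. The snippet below is a generic, hedged k-means sketch on fabricated features, shown only to illustrate the workflow rather than the NEI analysis itself.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical per-eye imaging features (e.g. lesion area, number of foci,
# distance to fovea); both the values and the feature set are placeholders.
features = rng.normal(size=(300, 3))

# Standardize so each feature contributes comparably to the distance metric,
# then partition eyes into a small number of phenotypic clusters.
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

print(np.bincount(labels))  # cluster sizes
```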

Seminar · Artificial Intelligence · Recording

Deep learning applications in ophthalmology

Aaron Lee
University of Washington
Mar 9, 2023

Deep learning techniques have revolutionized the field of image analysis and made it possible to quickly and efficiently train image analysis models that perform as well as human beings. This talk will cover the beginnings of deep learning in ophthalmology and vision science, and a variety of applications in which deep learning serves as a method for scientific discovery and for uncovering latent associations.

Seminar · Open Source · Recording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center for Neurodegenerative Diseases (DZNE), Bonn
Aug 26, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detection of these structures requires complex decision-making, which is often impossible with current image analysis software and is therefore typically performed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on Deep Convolutional Neural Networks (DNNs) is currently emerging as the most promising technique to solve such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms like Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also little experience in the community regarding how much training data has to be collected to obtain a reasonable prediction result with deep learning-based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides easy-to-use Python and command-line interfaces and is purely designed for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open-source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC to a variety of tasks. In our experience, a surprisingly small amount of sparse label data is needed to train a sufficiently good classifier for typical bioimaging applications. Not least because of this, YAPiC has become the "standard weapon" for our core facility to detect objects in hard-to-segment images. We would like to present some use cases, such as cell classification in high-content screening, tissue detection in histological slides, quantification of neural outgrowth in phase-contrast time series, and actin filament detection in transmission electron microscopy.
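
As a rough sketch of the underlying technique (supervised pixel classification trained from sparse annotations), the PyTorch snippet below fits a tiny fully convolutional network to an image in which only a few pixels carry labels. It is a generic illustration of the approach, not YAPiC's actual interface; the network, image, and label layout are made up.

```python
import torch
import torch.nn as nn

# Tiny fully convolutional pixel classifier (2 classes: background/foreground).
# Real tools use much deeper networks (e.g. U-Net); this is only a sketch.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 2, 1),
)

# Synthetic single-channel image and a sparse label map:
# -1 = unlabeled pixel (ignored by the loss), 0/1 = annotated class.
image = torch.randn(1, 1, 64, 64)
labels = torch.full((1, 64, 64), -1, dtype=torch.long)
labels[0, 10:15, 10:15] = 1   # a few foreground scribbles
labels[0, 40:45, 40:45] = 0   # a few background scribbles

criterion = nn.CrossEntropyLoss(ignore_index=-1)  # sparse labels via ignore_index
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    optimizer.zero_grad()
    logits = model(image)             # (1, 2, 64, 64) per-pixel class scores
    loss = criterion(logits, labels)  # only annotated pixels contribute
    loss.backward()
    optimizer.step()

# Dense prediction for every pixel after training on sparse annotations.
prediction = model(image).argmax(dim=1)
print(prediction.shape)  # torch.Size([1, 64, 64])
```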

Seminar · Neuroscience

Biomedical Image and Genetic Data Analysis with machine learning; applications in neurology and oncology

Wiro Niessen
Erasmus MC
Nov 8, 2020

In this presentation, I will discuss the opportunities and challenges of big-data analytics with AI techniques in medical imaging, also in combination with genetic and clinical data. Both conventional machine learning techniques, such as radiomics for tumor characterization, and deep learning techniques for studying brain ageing and prognosis in dementia will be addressed. The concept of deep imaging, a full integration of medical imaging and machine learning, will also be discussed. Finally, I will address the challenge of how to successfully integrate these technologies into the daily clinical workflow.

Seminar · Neuroscience

Matlab in neuroimaging (Part 3): Image analysis in Matlab

Xiangrui Li
Ohio State University
Jul 23, 2020

The last in a three-part lecture series on using MATLAB in neuroimaging. For full details, go here: https://bit.ly/matlabimaging

Seminar · Neuroscience

Machine reasoning in histopathologic image analysis

Phedias Diamandis
University of Toronto
Jul 8, 2020

Deep learning is an emerging computational approach, inspired by the human brain’s neural connectivity, that has transformed machine-based image analysis. Using histopathology as a model of an expert-level pattern recognition exercise, we explore how humans can teach machines to learn and mimic image recognition and decision-making. Moreover, these models also allow us to explore how computers can independently learn salient histological patterns and complex ontological relationships that parallel biological and expert knowledge, without the need for explicit direction or supervision. Deciphering the overlap between human and unsupervised machine reasoning may aid in eliminating biases and improving automation and accountability for artificial intelligence-assisted vision tasks and decision-making.

ePoster

AI-driven image analysis for label-free quantification of chemotherapeutic cytotoxicity in glial cells

Jasmine Trigg, Gillian Lovell, Daniel Porto, Nevine Holtz, Nicola Bevan, Tim Dale

FENS Forum 2024