object categorization
Professor Fiona Newell
Applications are invited for the role of Post-doctoral Researcher at Trinity College Institute of Neuroscience (TCIN). The aim of the project is to elucidate the behavioural and brain processes underpinning combined visual, auditory, and haptic object categorization in children and in adults, and to assess the extent to which such categories adapt to changes in task conditions. The research adopts a multidisciplinary approach involving cognitive neuroscience, statistical modelling, and psychophysics.

The candidate will work within the Multisensory Cognition lab, headed by Professor Fiona Newell, and will also collaborate with Prof. Robert Whelan and his research team. Both groups are based in TCIN and affiliated with the School of Psychology. The Multisensory Cognition lab has a dedicated laboratory equipped with state-of-the-art facilities for behavioural testing, including eye tracking and VR technology (HTC Vive and Oculus). TCIN also houses a research-dedicated MRI scanner, accessible to all principal investigators and their groups.

The position is funded for 2 years initially, with the possibility of continuation, and is available immediately. Ideally, the successful candidate is expected to take up the position no later than March 2022. The post-doctoral researcher will join a research team of PhD students, postdoctoral researchers, and a research assistant/lab manager, and will have the opportunity to collaborate with colleagues within the Institute of Neuroscience, across other disciplines in Trinity College, and with industrial partners. They will participate in regular lab and collaborator meetings, learn about diverse methodologies in perceptual science, and have the opportunity to attend major international conferences in the field.

Funding Information
The position is funded by Science Foundation Ireland through a Future Frontiers Award to Fiona Newell (Principal Investigator).
Building System Models of Brain-Like Visual Intelligence with Brain-Score
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexity of brain processing, studies have necessarily started with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of candidate models, we not only identify models that are beginning to match a range of brain data (~50% explained variance), but also discover that models' brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, better predict primate temporal processing, are more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps toward modeling the complexities of brain processing in an entire domain of intelligence.
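The core of a single neural benchmark of this kind can be stated compactly: a model's image-evoked features are mapped onto recorded spike rates with a cross-validated linear regression, and the score is the explained variance of the held-out predictions. The sketch below illustrates that logic with synthetic stand-in data and a PLS mapping; the array shapes, split count, and component number are illustrative assumptions, not the Brain-Score implementation.

```python
# A minimal sketch of one neural benchmark in the spirit described above:
# score a model by how well its features linearly predict primate spike rates.
# All data here are synthetic stand-ins; real suites use recorded responses.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical shapes: 200 images, 512 model features, 80 recorded neurons.
model_features = rng.normal(size=(200, 512))    # model activations per image
neural_responses = rng.normal(size=(200, 80))   # spike rates per image/neuron

def neural_benchmark_score(features, responses, n_splits=5):
    """Cross-validated explained variance, median across neurons."""
    fold_scores = []
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kfold.split(features):
        reg = PLSRegression(n_components=25)
        reg.fit(features[train], responses[train])
        pred = reg.predict(features[test])
        # Pearson r per neuron, squared -> fraction of explained variance.
        r = [np.corrcoef(pred[:, i], responses[test][:, i])[0, 1]
             for i in range(responses.shape[1])]
        fold_scores.append(np.median(np.square(r)))
    return float(np.mean(fold_scores))

print(f"benchmark score: {neural_benchmark_score(model_features, neural_responses):.3f}")
```

With random data the score hovers near zero, as it should; an integrative suite would aggregate many such scores across recordings, behavioral assays, and laboratories.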
A computational explanation for domain specificity in the human brain
Many regions of the human brain perform highly specific functions, such as recognizing faces, understanding language, and thinking about other people's thoughts. Why might this domain-specific organization be a good design strategy for brains, and what is the origin of domain specificity in the first place? In this talk, I will present recent work testing whether the segregation of face and object perception in human brains emerges naturally from an optimization for both tasks. We trained artificial neural networks on face and object recognition and found that the networks performed both tasks well by spontaneously segregating them into distinct pathways. Critically, the networks had neither prior knowledge of nor any inductive bias toward the tasks. Furthermore, networks optimized for object categorization together with tasks for which the human brain apparently does not develop specialization, such as food or car recognition, showed less task segregation. These results suggest that functional segregation can emerge spontaneously without a task-specific bias, and that the domain-specific organization of the cortex may reflect a computational optimization for the real-world tasks humans solve.
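To make the segregation analysis concrete, here is a toy sketch, not the networks from the study: a small network with a shared hidden layer is trained jointly on two synthetic tasks, then each hidden unit is zeroed in turn to see whether it matters for one task, both, or neither. The tasks, architecture, training budget, and segregation criterion are all illustrative assumptions.

```python
# Toy lesioning analysis: train one network on two tasks, then zero out each
# shared hidden unit and ask which task suffers. Everything here is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, h = 2000, 20, 64
x = torch.randn(n, d)
y_a = (x[:, :10].sum(dim=1) > 0).long()   # synthetic "task A" labels
y_b = (x[:, 10:].sum(dim=1) > 0).long()   # synthetic "task B" labels

shared = nn.Sequential(nn.Linear(d, h), nn.ReLU())
head_a, head_b = nn.Linear(h, 2), nn.Linear(h, 2)
params = [*shared.parameters(), *head_a.parameters(), *head_b.parameters()]
opt = torch.optim.Adam(params, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(300):                       # joint training on both tasks
    z = shared(x)
    loss = loss_fn(head_a(z), y_a) + loss_fn(head_b(z), y_b)
    opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def accuracies(mask):
    z = shared(x) * mask                   # lesion: zero selected hidden units
    acc_a = (head_a(z).argmax(1) == y_a).float().mean().item()
    acc_b = (head_b(z).argmax(1) == y_b).float().mean().item()
    return acc_a, acc_b

base_a, base_b = accuracies(torch.ones(h))
drops = []
for i in range(h):                         # lesion one unit at a time
    mask = torch.ones(h); mask[i] = 0.0
    a, b = accuracies(mask)
    drops.append((base_a - a, base_b - b))

# Illustrative criterion: a unit is task-segregated if its lesion hurts one
# task at least twice as much as the other.
segregated = sum(1 for da, db in drops
                 if max(da, db) > 2 * max(min(da, db), 1e-6))
print(f"baseline acc: {base_a:.2f}/{base_b:.2f}; segregated units: {segregated}/{h}")
```

Because the two toy tasks depend on disjoint input dimensions, most lesion-sensitive units end up mattering for only one task, which is the kind of spontaneous pathway segregation the abstract describes at much larger scale.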
Using retinotopic mapping in convolutional neural networks for object categorization leads to saliency-based visual object localization
FENS Forum 2024