Object Perception
A computational explanation for domain specificity in the human brain
Many regions of the human brain perform highly specific functions, such as recognizing faces, understanding language, and thinking about other people’s thoughts. Why might this domain-specific organization be a good design strategy for brains, and what is the origin of domain specificity in the first place? In this talk, I will present recent work testing whether the segregation of face and object perception in human brains emerges naturally from an optimization for both tasks. We trained artificial neural networks on face and object recognition, and found that networks were able to perform both tasks well by spontaneously segregating them into distinct pathways. Critically, the networks had neither prior knowledge of the tasks nor any task-specific inductive bias. Furthermore, networks optimized on object categorization together with tasks for which the human brain apparently does not develop specialization, such as food or car recognition, showed less task segregation. These results suggest that functional segregation can spontaneously emerge without a task-specific bias, and that the domain-specific organization of the cortex may reflect a computational optimization for the real-world tasks humans solve.
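As a rough illustration of the kind of experiment described above (a sketch of my own, not the speaker's actual model, data, or analysis), the code below trains a small dual-task network with a shared trunk and two readout heads on placeholder "face" and "object" tasks, then estimates task segregation by lesioning hidden units one at a time and asking whether each unit matters for one task far more than the other. The architecture, the random stand-in data, and the segregation score are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the speaker's code): joint face/object training
# followed by a unit-lesioning estimate of task segregation.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Placeholder data: random images with random labels for the two tasks.
N, C, H, W = 256, 3, 32, 32
images = torch.randn(N, C, H, W)
face_labels = torch.randint(0, 10, (N,))    # e.g. 10 face identities (assumed)
object_labels = torch.randint(0, 10, (N,))  # e.g. 10 object categories (assumed)

class DualTaskNet(nn.Module):
    """A shared convolutional trunk with two task-specific readout heads."""
    def __init__(self, hidden=64, n_face=10, n_object=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, hidden), nn.ReLU(),
        )
        self.face_head = nn.Linear(hidden, n_face)
        self.object_head = nn.Linear(hidden, n_object)

    def forward(self, x, lesion_mask=None):
        h = self.trunk(x)
        if lesion_mask is not None:          # zero out selected hidden units
            h = h * lesion_mask
        return self.face_head(h), self.object_head(h)

net = DualTaskNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Joint training on both tasks (full-batch, a few steps, purely illustrative).
for epoch in range(5):
    opt.zero_grad()
    face_out, obj_out = net(images)
    loss = F.cross_entropy(face_out, face_labels) + F.cross_entropy(obj_out, object_labels)
    loss.backward()
    opt.step()

def accuracies(mask=None):
    """Accuracy on each task, optionally with some hidden units lesioned."""
    with torch.no_grad():
        face_out, obj_out = net(images, lesion_mask=mask)
        face_acc = (face_out.argmax(1) == face_labels).float().mean().item()
        obj_acc = (obj_out.argmax(1) == object_labels).float().mean().item()
    return face_acc, obj_acc

base_face, base_obj = accuracies()
hidden = net.face_head.in_features
segregation_scores = []
for unit in range(hidden):
    mask = torch.ones(hidden)
    mask[unit] = 0.0                         # lesion one unit at a time
    face_acc, obj_acc = accuracies(mask)
    face_drop, obj_drop = base_face - face_acc, base_obj - obj_acc
    total = abs(face_drop) + abs(obj_drop) + 1e-8
    # 1.0 = the unit matters for only one task; 0.0 = it matters equally for both.
    segregation_scores.append(abs(face_drop - obj_drop) / total)

print(f"mean unit segregation: {sum(segregation_scores) / hidden:.3f}")
```

With random placeholder data the segregation score is not meaningful; the point is only the structure of the analysis: joint optimization on two tasks, followed by a lesion-based measure of how strongly individual units specialize for one task over the other.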
Human reconstruction of local image structure from natural scenes
Retinal projections often poorly represent the structure of the physical world: well-defined boundaries in the retinal image may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible in the retinal projection. Visual cortex is equipped with specialized mechanisms for sorting these two types of features according to their utility in interpreting the scene; however, we know little about the perceptual computations these mechanisms carry out. I will present novel paradigms for characterizing these processes in human vision, alongside examples of how the associated empirical results can be combined with targeted models to shape our understanding of the underlying perceptual mechanisms. Although the emerging view is far from complete, it challenges compartmentalized notions of bottom-up/top-down object segmentation and suggests instead that these two modes are best viewed as an integrated perceptual mechanism.