Generative models
Latest
Development and application of gaze control models for active perception
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of interest in the environment. Gaze can therefore be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems in applications such as autonomous navigation. Second, because these models are contingent on the human's task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze typing, robotic teleoperation, and human-robot interaction.
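As a minimal illustration of the supervised route mentioned in this abstract, the sketch below trains a small network to map images to gaze-density maps. The architecture, loss, and data shapes are illustrative assumptions, not the models described in the talk.

```python
# Minimal sketch: supervised gaze-density prediction (illustrative, not the speakers' model).
# An encoder-decoder maps an image to a fixation heatmap; training minimises the KL
# divergence between the empirical gaze density and the predicted density.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, image):
        logits = self.decoder(self.encoder(image))           # (B, 1, H, W)
        b, _, h, w = logits.shape
        # Normalise over all pixels so the output is a log gaze density.
        return F.log_softmax(logits.view(b, -1), dim=1).view(b, 1, h, w)

model = GazePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: images and empirical fixation heatmaps (normalised to sum to 1).
images = torch.rand(8, 3, 64, 64)
heatmaps = torch.rand(8, 1, 64, 64)
heatmaps = heatmaps / heatmaps.sum(dim=(2, 3), keepdim=True)

log_pred = model(images)
loss = F.kl_div(log_pred, heatmaps, reduction="batchmean")   # KL(target || prediction)
opt.zero_grad(); loss.backward(); opt.step()
```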
Generative models for video games
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances by my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I focus on diffusion models as generative models of human behavior. Diffusion models were previously shown to have impressive image generation capabilities; I present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project that takes ideas from language modeling to build a generative sequence model of an Xbox game.
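As a rough sketch of how a diffusion model can act as a generative model of behavior for imitation learning, the example below applies the standard DDPM noise-prediction objective to observation-conditioned action sequences. The network, horizon, and dimensions are illustrative assumptions, not the architecture from the talk.

```python
# Minimal sketch of diffusion-based imitation learning (illustrative; not the speaker's model).
# A noise-prediction network is trained to denoise expert action sequences conditioned on an
# observation, using the standard DDPM objective.
import torch
import torch.nn as nn

T = 100                                          # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

obs_dim, act_dim, horizon = 16, 4, 8             # illustrative sizes

class NoisePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim * horizon + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim * horizon),
        )

    def forward(self, obs, noisy_actions, t):
        # Condition on the observation and the (normalised) diffusion step.
        x = torch.cat([obs, noisy_actions.flatten(1), t.float().unsqueeze(1) / T], dim=1)
        return self.net(x).view(-1, horizon, act_dim)

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy demonstration batch: observations and expert action sequences.
obs = torch.randn(32, obs_dim)
actions = torch.randn(32, horizon, act_dim)

t = torch.randint(0, T, (32,))
noise = torch.randn_like(actions)
a_bar = alphas_bar[t].view(-1, 1, 1)
noisy = a_bar.sqrt() * actions + (1 - a_bar).sqrt() * noise   # forward diffusion of actions

loss = ((model(obs, noisy, t) - noise) ** 2).mean()           # predict the injected noise
opt.zero_grad(); loss.backward(); opt.step()
```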
Generative models of brain function: Inference, networks, and mechanisms
This talk will focus on the generative modelling of resting-state time series, or endogenous neuronal activity. I will survey developments in modelling distributed neuronal fluctuations – spectral dynamic causal modelling (DCM) for functional MRI – and how this modelling rests upon functional connectivity. The dynamics of brain connectivity have recently attracted a lot of attention among brain mappers, and I will show a novel method to identify dynamic effective connectivity using spectral DCM. Finally, I will summarise the development of the next generation of DCMs, ranging from computationally inexpensive large-scale, whole-brain schemes to, at the other extreme, more sophisticated and biophysically detailed generative models based on canonical microcircuits.
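For orientation, the generative model underlying spectral DCM can be sketched in its standard form; the exact parameterisation used in the talk may differ.

```latex
% Sketch of a spectral DCM generative model (standard form; details may differ from the talk)
\begin{align}
  \dot{x}(t) &= A\,x(t) + v(t)
    && \text{linearised neuronal dynamics; } A \text{ is effective connectivity} \\
  g_v(\omega) &= \alpha_v\,\omega^{-\beta_v}, \qquad
  g_e(\omega) = \alpha_e\,\omega^{-\beta_e}
    && \text{power-law spectra of endogenous fluctuations and observation noise} \\
  g_y(\omega) &= H(\omega)\,g_v(\omega)\,H(\omega)^{\dagger} + g_e(\omega),
  \qquad H(\omega) = h(\omega)\,(i\omega I - A)^{-1}
    && \text{predicted cross-spectral density of the observed signal}
\end{align}
```

Here h(ω) is the haemodynamic transfer function; fitting the predicted cross-spectra g_y(ω) to those observed in resting-state fMRI yields estimates of the effective connectivity A and the fluctuation parameters.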
Generative models of the human connectome
The human brain is a complex network of neuronal connections. The precise arrangement of these connections, otherwise known as the topology of the network, is crucial to its functioning. Recent efforts to understand how the complex topology of the brain has emerged have used generative mathematical models, which grow synthetic networks according to specific wiring rules. Evidence suggests that a wiring rule which emulates a trade-off between connection costs and functional benefits can produce networks that capture essential topological properties of brain networks. In this webinar, Professor Alex Fornito and Dr Stuart Oldham will discuss these previous findings, as well as their own efforts in creating more physiologically constrained generative models.

Professor Alex Fornito is Head of the Brain Mapping and Modelling Research Program at the Turner Institute for Brain and Mental Health. His research focuses on developing new imaging techniques for mapping human brain connectivity and applying these methods to shed light on brain function in health and disease.

Dr Stuart Oldham is a Research Fellow at the Turner Institute for Brain and Mental Health and a Research Officer at the Murdoch Children’s Research Institute. He is interested in characterising the organisation of human brain networks, with particular focus on how this organisation develops, using neuroimaging and computational tools.
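To make the cost-benefit wiring rule concrete, the sketch below grows a synthetic network by adding edges with probability proportional to a power-law trade-off between connection distance (cost) and a simplified matching index (topological value). This functional form and these parameters are common choices for such models in the literature, not necessarily the speakers' own models.

```python
# Minimal sketch of a cost-value generative network model (illustrative; a commonly used
# power-law form, not necessarily the speakers' exact model). Edges are added one at a time
# with probability proportional to (distance)^eta * (topological value)^gamma.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_edges = 50, 200
eta, gamma = -2.0, 1.5                  # negative eta penalises long (costly) connections

coords = rng.random((n_nodes, 3))       # node positions in a unit cube
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)          # self-connections get zero score
adj = np.zeros((n_nodes, n_nodes), dtype=int)

def matching_index(adj):
    """Simplified topological value: normalised overlap of node neighbourhoods."""
    deg = adj.sum(1, keepdims=True)
    common = adj @ adj                  # number of shared neighbours
    denom = deg + deg.T - 2 * adj
    return np.divide(common, denom, out=np.zeros(adj.shape, dtype=float), where=denom > 0)

iu = np.triu_indices(n_nodes, k=1)
for _ in range(n_edges):
    value = matching_index(adj) + 1e-6  # small offset avoids zero probabilities early on
    score = (dist ** eta) * (value ** gamma)
    score[adj > 0] = 0.0                # never duplicate an existing edge
    p = score[iu]
    i = rng.choice(len(p), p=p / p.sum())
    a, b = iu[0][i], iu[1][i]
    adj[a, b] = adj[b, a] = 1

print("edges:", adj.sum() // 2, "mean degree:", adj.sum() / n_nodes)
```

Fitted to empirical connectomes, the exponents of such a rule control how strongly cost and value shape the resulting topology.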
Navigating through the Latent Spaces in Generative Models
Bernstein Conference 2024
Generative models for building a worm's mind
COSYNE 2023
Implicit generative models using Kernel Similarity Matching
COSYNE 2025