Robots

Discover seminars, jobs, and research tagged with robots across World Wide.
23 curated items · 21 Seminars · 2 Positions
Position · Computational Neuroscience

Laurent Perrinet

LEAT research lab, INT institute
Sophia-Antipolis and Marseille
Dec 5, 2025

This PhD subject focuses on the association between attention and spiking neural networks to define new, efficient AI models for embedded systems such as drones, robots, and, more generally, autonomous systems. The thesis will take place between the LEAT research lab in Sophia-Antipolis and the INT institute in Marseille, which both develop complementary approaches to bio-inspired AI, from neuroscience to embedded systems design.

Seminar · Cognition

Why robots? A brief introduction to the use of robots in psychological research

Junko Kanero
Sabanci University
Jun 4, 2023

Why should psychologists be interested in robots? This talk aims to illustrate how social robots, machines with human-like features and behaviors, can offer interesting insights into the human mind. I will first provide a brief overview of how robots have been used in psychology and cognitive science research, focusing on two approaches: Developmental Robotics and Human-Robot Interaction (HRI). We will then delve into recent work in HRI, including my own, in greater detail. We will also address the limitations of research thus far, such as the lack of properly controlled experiments, and discuss how the scientific community should evaluate the use of technology in educational and other social settings.

Seminar · Neuroscience · Recording

Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines

Shiwali Mohan
Palo Alto Research Center
Mar 30, 2023

Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by the humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has and motivate an intelligent system design that can exploit this structure. The system, AILEEN, is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition (Soar, ACT-R, and Sigma) have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they miss a critical piece of intelligent behavior: analogical reasoning and generalization. I will introduce a new memory, a concept memory, that integrates with a Common Model of Cognition architecture and supports ITL.

Seminar · Neuroscience · Recording

Children-Agent Interaction For Assessment and Rehabilitation: From Linguistic Skills To Mental Well-being

Micole Spitale
Department of Computer Science and Technology, University of Cambridge
Feb 6, 2023

Socially Assistive Robots (SARs) have shown great potential to help children in therapeutic and healthcare contexts. SARs have been used for companionship, learning enhancement, social and communication skills rehabilitation for children with special needs (e.g., autism), and mood improvement. Robots can be used as novel tools to assess and rehabilitate children’s communication skills and mental well-being by providing affordable and accessible therapeutic and mental health services. In this talk, I will present the various studies I have conducted during my PhD and at the Cambridge Affective Intelligence and Robotics Lab to explore how robots can help assess and rehabilitate children’s communication skills and mental well-being. More specifically, I will provide both quantitative and qualitative results and findings from (i) an exploratory study with children with autism and global developmental disorders to investigate the use of intelligent personal assistants in therapy; (ii) an empirical study involving children with and without language disorders interacting with a physical robot, a virtual agent, and a human counterpart to assess their linguistic skills; (iii) an 8-week longitudinal study involving children with autism and language disorders who interacted either with a physical or a virtual robot to rehabilitate their linguistic skills; and (iv) an empirical study to aid the assessment of mental well-being in children. These findings can inform and help the child-robot interaction community design and develop new adaptive robots to help assess and rehabilitate linguistic skills and mental well-being in children.

Seminar · Neuroscience

Lifelong Learning AI via neuro-inspired solutions

Hava Siegelmann
University of Massachusetts Amherst
Oct 26, 2022

AI embedded in real systems, such as satellites, robots, and other autonomous devices, must make fast, safe decisions even when the environment changes or the available power is limited; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity; rather, the AI must be trained in advance, typically on a large dataset and with considerable computational power. Once fielded, the AI is frozen: it is unable to use its experience to operate if the environment proves outside its training, or to improve its expertise; worse, since datasets cannot cover all possible real-world situations, systems with such frozen intelligent control are likely to fail. Lifelong Learning is the cutting edge of artificial intelligence, encompassing computational methods that allow systems to learn in runtime and to incorporate that learning for application in new, unanticipated situations. Until recently, this sort of computation was found exclusively in nature; thus, Lifelong Learning looks to nature, and in particular to neuroscience, for its underlying principles and mechanisms, and then translates them to this new technology. Our presentation will introduce a number of state-of-the-art approaches to achieving adaptive learning in AI, including work from DARPA's L2M program and subsequent developments. Many environments are affected by temporal changes, such as the time of day, week, or season. One way to create adaptive systems that are both small and robust is to make them aware of time and able to comprehend temporal patterns in the environment. We will describe our current research in temporal AI, while also considering power constraints.

Seminar · Neuroscience

Faking emotions and a therapeutic role for robots and chatbots: Ethics of using AI in psychotherapy

Bipin Indurkhya
Cognitive Science Department, Jagiellonian University, Kraków
May 18, 2022

In recent years, there has been a proliferation of social robots and chatbots designed so that users form an emotional attachment to them. This talk will start by presenting the first such chatbot, a program called Eliza designed by Joseph Weizenbaum in the mid-1960s. Then we will look at some recent robots and chatbots with Eliza-like interfaces and examine their benefits as well as the various ethical issues raised by deploying such systems.

Seminar · Open Source · Recording

Open-source neurotechnologies for imaging cortex-wide neural activity in behaving animals

Suhasa Kodandaramaiah
University of Minnesota
May 3, 2022

Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior. However, there is a lack of tools that allow simultaneous monitoring and perturbation of neural activity across multiple cortical regions. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures, such as large bilateral craniotomies across the cortex and skull thinning, in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (>300 days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular- and subcellular-resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm² and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex with resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open-field behaviors, social interactions, and transitions from wakefulness to sleep.

Seminar · Neuroscience

Why would we need Cognitive Science to develop better Collaborative Robots and AI Systems?

Dorothea Koert
Technical University Darmstadt
Dec 14, 2021

While classical industrial robots are mostly designed for repetitive tasks, assistive robots will be challenged by a variety of different tasks in close contact with humans. Here, learning through direct interaction with humans provides a potentially powerful tool for an assistive robot to acquire new skills and to incorporate prior human knowledge during the exploration of novel tasks. Moreover, an intuitive interactive teaching process may allow non-programming experts to contribute to robotic skill learning and may help increase acceptance of robotic systems in shared workspaces and everyday life. In this talk, I will discuss my recent research on interactive robot skill learning and the remaining challenges on the route to human-centered teaching of assistive robots. In particular, I will also discuss potential connections and overlaps with cognitive science. The presented work covers learning a library of probabilistic movement primitives from human demonstrations, intention-aware adaptation of learned skills in shared workspaces, and multi-channel interactive reinforcement learning for sequential tasks.

Seminar · Neuroscience

Advancing Brain-Computer Interfaces by adopting a neural population approach

Juan Alvaro Gallego
Imperial College London
Nov 29, 2021

Brain-computer interfaces (BCIs) have afforded paralysed users “mental control” of computer cursors and robots, and even of electrical stimulators that reanimate their own limbs. Most existing BCIs map the activity of hundreds of motor cortical neurons, recorded with implanted electrodes, into control signals to drive these devices. Despite these impressive advances, the field is facing a number of challenges that need to be overcome in order for BCIs to become widely used during daily living. In this talk, I will focus on two such challenges: 1) having BCIs that allow performing a broad range of actions; and 2) having BCIs whose performance is robust over long time periods. I will present recent studies from our group in which we apply neuroscientific findings to address both issues. This research is based on an emerging view of how the brain works. Our proposal is that brain function is not based on the independent modulation of the activity of single neurons, but rather on specific population-wide activity patterns, which mathematically define a “neural manifold”. I will provide evidence in favour of such a neural manifold view of brain function, and illustrate how advances in systems neuroscience may be critical for the clinical success of BCIs.

Seminar · Neuroscience · Recording

Embodied Artificial Intelligence: Building brain and body together in bio-inspired robots

Fumiya Iida
Department of Engineering
Nov 15, 2021

TBC

Seminar · Neuroscience · Recording

Swarms for people

Sabine Hauert
University of Bristol
Oct 7, 2021

As tiny robots become individually more sophisticated, and larger robots easier to mass-produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Cm-sized robots that work in the thousands may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems is fundamentally different, much of their emergent swarm behaviour can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical worlds, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.

Seminar · Neuroscience

Brain-Machine Interfaces: Beyond Decoding

José del R. Millán
University of Texas at Austin
Sep 15, 2021

A brain-machine interface (BMI) is a system that enables users to interact with computers and robots through the voluntary modulation of their brain activity. Such a BMI is particularly relevant as an aid for patients with severe neuromuscular disabilities, although it also opens up new possibilities in human-machine interaction for able-bodied people. Real-time signal processing and decoding of brain signals are certainly at the heart of a BMI. Yet this does not suffice for subjects to operate a brain-controlled device. In the first part of my talk I will review some of our recent studies, most involving participants with severe motor disabilities, that illustrate additional principles of a reliable BMI that enable users to operate different devices. In particular, I will show how an exclusive focus on machine learning is not necessarily the solution, as it may not promote subject learning. This highlights the need for a comprehensive mutual learning methodology that fosters learning at the three critical levels of machine, subject, and application. To further illustrate that BMI is more than just decoding, I will discuss how to enhance subject learning and BMI performance through appropriate feedback modalities. Finally, I will show how these principles translate to motor rehabilitation, where in a controlled trial chronic stroke patients achieved significant functional recovery after the intervention, which was retained 6-12 months after the end of therapy.

Seminar · Neuroscience · Recording

A role for dopamine in value-free learning

Luke Coddington
Dudman lab, HHMI Janelia
Jul 13, 2021

Recent success in training artificial agents and robots derives from a combination of direct learning of behavioral policies and indirect learning via value functions. Policy learning and value learning employ distinct algorithms that depend upon evaluation of errors in performance and reward prediction errors, respectively. In mammals, behavioral learning and the role of mesolimbic dopamine signaling have been extensively evaluated with respect to reward prediction errors; but there has been little consideration of how direct policy learning might inform our understanding. I’ll discuss our recent work on classical conditioning in naïve mice (https://www.biorxiv.org/content/10.1101/2021.05.31.446464v1) that provides multiple lines of evidence that phasic dopamine signaling regulates policy learning from performance errors in addition to its well-known roles in value learning. This work points towards new opportunities for unraveling the mechanisms of basal ganglia control over behavior under both adaptive and maladaptive learning conditions.
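The distinction this abstract builds on, direct policy learning from performance errors versus value learning from reward prediction errors, can be illustrated with a generic two-armed-bandit sketch. This is a textbook toy, not the speakers' model; all names and parameters here are invented for illustration:

```python
import math
import random

random.seed(1)

P = [0.2, 0.8]          # true reward probability of each action (illustrative)
ALPHA, TRIALS = 0.1, 5000

# --- Value learning: estimate action values from reward prediction errors ---
q = [0.0, 0.0]
for _ in range(TRIALS):
    # epsilon-greedy choice based on current value estimates
    a = random.randrange(2) if random.random() < 0.1 else max(range(2), key=lambda i: q[i])
    r = 1.0 if random.random() < P[a] else 0.0
    q[a] += ALPHA * (r - q[a])             # RPE-driven update

# --- Policy learning: adjust action preferences directly (REINFORCE) ---
h = [0.0, 0.0]                             # action preferences
for _ in range(TRIALS):
    exp_h = [math.exp(x) for x in h]
    pi = [x / sum(exp_h) for x in exp_h]   # softmax policy
    a = 0 if random.random() < pi[0] else 1
    r = 1.0 if random.random() < P[a] else 0.0
    for i in range(2):                     # gradient of log pi, scaled by reward
        h[i] += ALPHA * r * ((1.0 - pi[i]) if i == a else -pi[i])

# Both learners come to favor the richer action, via different quantities:
# q tracks expected reward per action; h/pi encode the behavioral policy itself.
print([round(x, 2) for x in q], [round(x, 2) for x in pi])
```

The point of the contrast: the value learner improves its reward predictions and derives behavior from them, while the policy learner improves behavior directly without ever estimating values.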

Seminar · Neuroscience · Recording

Technologies for large scale cortical imaging and electrophysiology

Suhasa Kodandaramaiah
University of Minnesota
Jun 21, 2021

Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior. However, there is a lack of tools that allow simultaneous monitoring and perturbation of neural activity across multiple cortical regions. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures, such as large bilateral craniotomies across the cortex and skull thinning, in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (>300 days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular- and subcellular-resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm² and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex with resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open-field behaviors, social interactions, and transitions from wakefulness to sleep.

Seminar · Neuroscience

How Memory Guides Value-Based Decisions

Daphna Shohamy
Columbia University
Dec 16, 2020

From robots to humans, the ability to learn from experience turns a rigid response system into a flexible, adaptive one. In this talk, I will discuss emerging findings regarding the neural and cognitive mechanisms by which learning shapes decisions. The lecture will focus on how multiple brain regions interact to support learning, what this means for how memories are built, and the consequences for how decisions are made. Results emerging from this work challenge the traditional view of separate learning systems and advance understanding of how memory biases decisions in both adaptive and maladaptive ways.

Seminar · Physics of Life · Recording

Surprises in self-deforming self-propelling systems

Daniel Goldman
Georgia Institute of Technology
Nov 17, 2020

From slithering snakes to entangling robots, self-deforming (shape-changing) active systems display surprising dynamics. This is particularly true when such systems interact with environments or other agents to generate self-propulsion (movement). In this talk, I will discuss a few projects from my group illustrating unexpected effects in individuals and collectives of self-deformers. For example, snakes and snake-like robots mechanically “diffract” from fixed environmental heterogeneities, collections of smart active robots (smarticles) can locomote (and phototax) as a collective despite individual immobility, and geometrically, actively entangling ensembles of blackworms and robots can self-propel as a unit to thermotax or phototax without centralized control.

Seminar · Neuroscience · Recording

Leveraging neural manifolds to advance brain-computer interfaces

Juan Álvaro Gallego
Imperial College London
Oct 8, 2020

Brain-computer interfaces (BCIs) have afforded paralysed users “mental control” of computer cursors and robots, and even of electrical stimulators that reanimate their own limbs. Most existing BCIs map the activity of hundreds of motor cortical neurons, recorded with implanted electrodes, into control signals to drive these devices. Despite these impressive advances, the field is facing a number of challenges that need to be overcome in order for BCIs to become widely used during daily living. In this talk, I will focus on two such challenges: 1) having BCIs that allow performing a broad range of actions; and 2) having BCIs whose performance is robust over long time periods. I will present recent studies from our group in which we apply neuroscientific findings to address both issues. This research is based on an emerging view of how the brain works. Our proposal is that brain function is not based on the independent modulation of the activity of single neurons, but rather on specific population-wide activity patterns, which mathematically define a “neural manifold”. I will provide evidence in favour of such a neural manifold view of brain function, and illustrate how advances in systems neuroscience may be critical for the clinical success of BCIs.

Seminar · Neuroscience · Recording

Affordable Robots/Computer Systems to Identify, Assess, and Treat Impairment After Brain Injury

Michelle Johnson
University of Pennsylvania, Department of Physical Medicine and Rehabilitation and Department of BioEngineering
Oct 6, 2020

Non-traumatic brain injury due to stroke, cerebral palsy, and HIV often results in serious long-term disability worldwide, affecting more than 150 million persons globally, with the majority living in low- and middle-income countries. These diseases often result in varying levels of motor and cognitive impairment due to brain injury, which then affects the person’s ability to complete activities of daily living and fully participate in society. Increasingly, advanced technologies are being used to support identification, diagnosis, assessment, and therapy for patients with brain injury. Specifically, robotic and mechatronic systems can provide patients, physicians, and rehabilitation clinical providers with additional support to care for and improve the quality of life of children and adults with motor and cognitive impairment. This talk will provide a brief introduction to the area of rehabilitation robotics and, via case studies, illustrate how computer/technology-assisted rehabilitation systems can be developed and used to assess motor and cognitive impairment, detect early evidence of functional impairment, and augment therapy in high- and low-resource settings.

Seminar · Neuroscience · Recording

What can we further learn from the brain for artificial intelligence?

Kenji Doya
Okinawa Institute of Science and Technology
Sep 10, 2020

Deep learning is a prime example of how brain-inspired computing can benefit the development of artificial intelligence. But what else can we learn from the brain to bring AI and robotics to the next level? Energy efficiency and data efficiency are major features of the brain and human cognition that today’s deep learning has yet to deliver. The brain can be seen as a multi-agent system of heterogeneous learners using different representations and algorithms. The flexible use of reactive, model-free control and model-based “mental simulation” appears to be the basis for the computational and data efficiency of the brain. How the brain efficiently acquires and flexibly combines prediction and control modules is a major open problem in neuroscience, and its solution should help the development of more flexible and autonomous AI and robotics.
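The contrast between reactive, model-free control and model-based “mental simulation” drawn in this abstract can be sketched on a toy problem. This is a generic textbook illustration, not Doya's specific architecture; the task, state space, and parameters are all invented for the example:

```python
import random

random.seed(0)

# Toy chain world: states 0..3; action -1 moves left, +1 moves right;
# being at state 3 after a move yields reward 1. All values illustrative.
N_STATES, GOAL, GAMMA = 4, 3, 0.9

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

# --- Model-free: Q-learning, values learned only from sampled experience ---
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}
for _ in range(2000):
    s, a = random.randrange(N_STATES), random.choice((-1, 1))
    s2, r = step(s, a)
    target = r + GAMMA * max(q[(s2, -1)], q[(s2, 1)])
    q[(s, a)] += 0.1 * (target - q[(s, a)])

# --- Model-based: "mental simulation" with a known model (value iteration) ---
v = [0.0] * N_STATES
for _ in range(100):
    v = [max(r + GAMMA * v[s2] for s2, r in (step(s, -1), step(s, 1)))
         for s in range(N_STATES)]

# Both approaches end up preferring rightward moves toward the rewarded state;
# the model-free learner needed thousands of samples, the model-based one none.
print(q[(2, 1)], q[(2, -1)], v)
```

The data-efficiency point mentioned in the abstract shows up here: once a model of the environment is available, planning reaches the same policy without collecting any experience at all.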