robotics
Prof. Jim Torresen
The Department of Informatics at the University of Oslo, Norway is looking for candidates to fill two permanent positions as Associate Professors in Machine Learning. The positions can be affiliated with or interact with the Robotics and Intelligent Systems (ROBIN) group at the University. Candidates with a background in artificial intelligence/machine learning related to robotics or embedded systems are encouraged to apply. The candidates will be evaluated with respect to two different profiles: 1. Associate Professor in Ethical Considerations in Machine Learning: for this position, we are looking for a candidate with a research background in machine learning, including applications, and a track record in analysing aspects of machine learning methodology related to ethical considerations. 2. Associate Professor in Machine Learning: this position is expected to be offered to a candidate with a strong research background in machine learning, including applications. Please note that this position is announced in Norwegian and requires candidates to have fluent oral and written communication skills in both English and a Scandinavian language.
N/A
The Faculty of Computer Science of HSE University invites applications for full-time, tenure-track positions of Assistant Professor in all areas of computer science including but not limited to artificial intelligence, machine learning, computer vision, programming language theory, software engineering, system programming, algorithms, computational complexity, distributed and parallel computation, bioinformatics, human-computer interaction, and robotics. The successful candidate is expected to conduct high-quality research publishable in reputable peer-reviewed journals, with research support provided by the University.
Prof. Dr. Verena V Hafner
The EU project METATOOL aims to provide a computational model of synthetic awareness to enhance adaptation and achieve tool invention. This will enable a robot to monitor and self-evaluate its performance, ground and reuse this information for adapting to new circumstances, and finally unlock the possibility of creating new tools. Under the predictive account of awareness, and based on both neuroscientific and archeological evidence, we will develop a novel computational model of metacognition based on predictive processing (metaprediction) and validate its utility in real robots in two use case scenarios: conditional sequential tasks and tool creation. At Humboldt-Universität zu Berlin, we will develop and investigate computational models for tool-use and tool invention based on predictive processing paradigms. The models will be evaluated and implemented in robots interacting with tools in a real physical environment.
Prof. Jim Torresen
The goal of the position is to create prediction methods for proactive planning of future robot actions and to design robot acting mechanisms for adaptive responses ranging from quick and intuitive to slower, well-reasoned. We combine sensing across multiple modalities with learned knowledge to predict outcomes and choose the best actions. The goal is to transfer these skills to human-robot interaction in home scenarios, including the support of everyday tasks and physical rehabilitation. Thus, it is relevant to work on implementation and research within robot perception and control for the robot tasks. User studies through human-robot interaction experiments are to be performed. A PhD fellow and a researcher have already been hired for the project and will contribute to the research outlined above.
Dr. D.M. Lyons
Fordham University (New York City) has developed a unique Ph.D. program in Computer Science, tuned to the latest demands and opportunities of the field. Upon completion of the Ph.D. in Computer Science program, students will be able to demonstrate the fundamental, analytical, and computational knowledge and methodology needed to conduct original research and practical experiments in the foundations and theory of computer science, in software and systems, and in informatics and data analytics. They will also be able to apply computing and informatics methods and techniques to understand, analyze, and solve a variety of significant, real-world problems and issues in the cyber, physical, and social domains. Furthermore, they will be able to conduct original, high-quality, ethically informed, scientific research and publish in respected, peer-reviewed journals and conferences. Lastly, they will be able to effectively instruct others in a variety of topics in Computer Science at the university level, addressing ethics, justice, diversity, and sustainability. This training and education prepares graduates to pursue academic careers, as well as research and leadership positions in industry, government, and leading technology companies. A hallmark of the program is early involvement in research, within the first two years of the program. Students will have the opportunity to carry out research in machine learning and AI/robotics, big data analytics and informatics, data and information fusion, information and cyber security, and software engineering and systems.
NGUYEN Sao Mai
This internship studies the applications of Hierarchical Reinforcement Learning methods in robotics. Deploying autonomous robots in real-world environments typically introduces multiple difficulties, among which are the size of the observable space and the length of the required tasks. Reinforcement Learning helps agents solve decision-making problems by autonomously discovering successful behaviours and learning them, but these methods are known to struggle with long and complex tasks. Hierarchical Reinforcement Learning extends this paradigm by decomposing these problems into easier subproblems, with high-level agents determining which subtasks need to be accomplished and low-level agents learning to achieve them. During this internship, the intern will: get acquainted with the state of the art in Hierarchical Reinforcement Learning, including the most notable algorithms [1, 2, 3], the challenges they solve, and their limitations; reimplement some of these approaches and validate their results in simulated robotics environments such as iGibson [4]; and establish an experimental comparison of these methods with respect to specific research hypotheses. The intern is also expected to collaborate with a PhD student whose work is closely related to this topic.
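To make the high-level/low-level decomposition concrete, here is a minimal, self-contained sketch of a goal-conditioned two-level agent loop in Python. The toy environment, the fixed decision horizon, and the hand-coded policies are illustrative assumptions, not part of the internship description; in practice both levels would be learned.

```python
# Minimal sketch of the high-level / low-level decomposition used in
# goal-conditioned Hierarchical RL. Everything here (environment, policies,
# decision horizon) is illustrative.
import numpy as np


class PointEnv:
    """Toy 1-D environment: the agent must reach position 10."""
    def reset(self):
        self.pos = 0.0
        return self.pos

    def step(self, action):
        self.pos += float(np.clip(action, -1.0, 1.0))
        done = abs(self.pos - 10.0) < 0.5
        reward = 1.0 if done else 0.0          # sparse task reward
        return self.pos, reward, done


def high_level_policy(state):
    """Pick a subgoal: a position a few units ahead of the agent."""
    return state + 3.0


def low_level_policy(state, subgoal):
    """Greedy controller that moves toward the current subgoal."""
    return np.clip(subgoal - state, -1.0, 1.0)


env = PointEnv()
state = env.reset()
k = 5                                          # high-level decision horizon
for t in range(100):
    if t % k == 0:                             # high level re-plans every k steps
        subgoal = high_level_policy(state)
    action = low_level_policy(state, subgoal)
    next_state, reward, done = env.step(action)
    intrinsic_reward = -abs(subgoal - next_state)  # low-level reward: reach the subgoal
    state = next_state
    if done:
        print(f"reached goal at step {t}")
        break
```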
N/A
The Institute of Robotics and Cognitive Systems at the University of Lübeck has a vacancy for an Assistant Professorship (Juniorprofessur, Tenure Track W2) for Robotics for an initial period of three years, with an option to extend for a further three years. The future holder of the position should represent the field of robotics in research and teaching and shall establish their own working group at the Institute of Robotics and Cognitive Systems. The future holder of the position should have a very good doctorate and demonstrable scientific experience in one or more of the following research areas: modelling, simulation, and control of robots; robot kinematics and dynamics; robot sensor technology, e.g., force and moment sensor technology; robotic systems, e.g., telerobotic systems and humanoid robots; soft robotics and continuum robotics; AI and machine learning methods in robotics; human-robot collaboration and safe autonomous robot systems; AR/VR in robotics; and applications of AI and robotics in medicine. The range of tasks also includes the acquisition of third-party funds and taking on project management. The applicant is expected to be scientifically involved in the research focus areas of the institute and the profile areas of the university, especially in the context of projects acquired by the institute itself (public funding, industrial cooperation, etc.). The position holder is expected to be willing to cooperate with the "Lübeck Innovation Hub for Robotic Surgery" (LIROS), the "Center for Doctoral Studies Lübeck" and the "Open Lab for Robotics and Imaging in Industry and Medicine" (OLRIM). In teaching, participation in the degree programme "Robotics and Autonomous Systems" (German-language Bachelor's, English-language Master's) as well as the other degree programmes of the university's STEM sections is expected.
Jun.-Prof. Dr.-Ing. Rania Rayyes
The main focus of this position is to develop novel AI systems and methods for robot applications: dexterous robot grasping, human-robot learning, and transfer learning (efficient online learning). The role offers close cooperation with other institutes, universities, and numerous industrial partners; a self-determined environment for developing your own research topics, with active support for the doctoral research project; flexible working hours; and work in a young, interdisciplinary research team.
Thomas Nowotny
You will develop novel active AI algorithms that are inspired by the rapid and robust learning of insects within the £1.2m EPSRC International Centre-to-Centre Collaboration project "ActiveAI: active learning and selective attention for rapid, robust and efficient AI", and will work in collaboration with the University of Sheffield and world-leading neuroscientists in Australia. Your primary role will be to develop a new class of ActiveAI controllers for problems in which insects excel but deep learning methods struggle. These problems have one or more of the following characteristics: (i) learning must occur rapidly, (ii) learning samples are few or costly, (iii) computational resources are limited, and (iv) the learning problem changes over time. Insects deal with such complex tasks robustly despite limited computational power because learning is an active process emerging from the interaction of evolved brains, bodies and behaviours. Through a virtuous cycle of modelling and experiments, you will develop insect-inspired models, in which behavioural strategies and specialised sensors actively structure sensory input while selective attention drives learning to the most salient information. The cycle of modelling and experiments will be achieved through field work in both Sussex and Australia.
Silvio P. Sabatini
The project aims to implement neuromorphic multi-layer networks of leaky integrate-and-fire (LIF) neurons, cascaded with a motorized event-based camera (DAVIS, DVS, https://inivation.com/technology/), to obtain artificial replicas of the early stages of an active vision system. Testing the models will involve the assessment of multiple and varying parameters captured under real-life and adaptive conditions. At the functional level, the system will (1) consider the neural resources required to account for a range of linear/nonlinear early visual processes, and (2) provide the inference engines for relating the resulting visual representations to performance on psychophysical tasks. The visual performance of the resulting silicon model will be assessed against that of a typical human observer. The objective is twofold: on the one hand, we contribute to a deeper understanding of visual processes, especially to predicting how early computation may reverberate through the sensory pathways, eventually contributing to functional vision. On the other hand, we contribute to the definition of a new generation of perceptual machines to be used in robotics and, more generally, in newly developed Artificial Intelligence systems.
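As a point of reference for the modelling approach mentioned above, the following is a minimal sketch of a discrete-time leaky integrate-and-fire (LIF) layer driven by a synthetic binary event stream standing in for event-camera output. All parameters, weights, and the random input are illustrative assumptions, not values from the project.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) layer driven by a
# synthetic event stream (stand-in for DVS camera events).
import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_neurons, n_steps = 64, 16, 200
dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0

weights = rng.normal(0.0, 0.5, size=(n_neurons, n_inputs))
v = np.zeros(n_neurons)                            # membrane potentials
events = rng.random((n_steps, n_inputs)) < 0.05    # sparse binary event stream

spike_counts = np.zeros(n_neurons)
for t in range(n_steps):
    input_current = weights @ events[t]            # events drive synaptic current
    v += dt / tau * (-v) + input_current           # leak plus integration
    spikes = v >= v_thresh                         # fire when threshold is crossed
    v[spikes] = v_reset                            # reset after a spike
    spike_counts += spikes

print("output firing rates (Hz):", spike_counts / (n_steps * dt))
```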
Vito Trianni, Ph.D.
Two two-year Research Assistant positions are available at the Institute of Cognitive Sciences and Technologies, Italian National Research Council, starting as early as February 2023. The selected candidates will have the opportunity to work on the research track of HACID (http://hacid-project.eu/), a HORIZON Innovation Action, i.e., a collaborative project funded under the Horizon Europe Programme within the topic 'AI, Data and Robotics at work'. HACID develops a novel hybrid collective intelligence for decision support to professionals facing complex open-ended problems, promoting engagement, fairness and trust. The focus of these fellowships is the design and development of knowledge graphs and collective intelligence methods in the context of two application domains: medical diagnostics and decision support for climate change adaptation policies.
N/A
The KINDI Center for Computing Research at the College of Engineering in Qatar University is seeking high-caliber candidates for a research faculty position at the level of assistant professor in the area of artificial intelligence (AI). The applicant should possess a Ph.D. degree in Computer Science or Computer Engineering or related fields from an internationally recognized university and should demonstrate an outstanding research record in AI and related subareas (e.g., machine/deep learning (ML/DL), computer vision, robotics, natural language processing, etc.) and fields (e.g., data science, big data analytics, etc.). Candidates with good hands-on experience are preferred. The position is available immediately.
N/A
The position integrates into an attractive environment of existing activities in artificial intelligence such as machine learning for robotics and computer vision, natural language processing, recommender systems, schedulers, virtual and augmented reality, and digital forensics. The candidate should engage in research and teaching in the general area of artificial intelligence. Examples of possible foci include machine learning for pattern recognition, prediction and decision making; data-driven, adaptive, learning and self-optimizing systems; explainable and transparent AI; representation learning; generative models; neuro-symbolic AI; causality; distributed/decentralized learning; environmentally friendly, sustainable, data-efficient, privacy-preserving AI; neuromorphic computing and hardware aspects; and knowledge representation, reasoning, and ontologies. Cooperation with research groups at the Department of Computer Science, the Research Areas, and in particular the Digital Science Center of the University, as well as with business, industry and international research institutions, is expected. The candidate should reinforce or complement existing strengths of the Department of Computer Science.
N/A
The AI Department of the Donders Centre for Cognition (DCC), embedded in the Donders Institute for Brain, Cognition and Behaviour, and the School of Artificial Intelligence at Radboud University Nijmegen are looking for a researcher in reinforcement learning with an emphasis on safety and robustness, an interest in natural computing as well as in applications in neurotechnology and other domains such as robotics, healthcare and/or sustainability. You will be expected to perform top-quality research in (deep) reinforcement learning, actively contribute to the DBI2 consortium, interact and collaborate with other researchers and specialists in academia and/or industry, and be an inspiring member of our staff with excellent communication skills. You are also expected to engage with students through teaching and master projects not exceeding 20% of your time.
Helena Dalmau Felderhoff
The Max Planck Institutes for Biological Cybernetics and Intelligent Systems as well as the AI Center in Tübingen & Stuttgart (Germany) offer up to 10 students at the Bachelor or Master level paid three-month internships during the summer of 2024. Successful applicants will work with top-level scientists on research projects spanning machine learning, electrical engineering, theoretical neuroscience, behavioral experiments, robotics and data analysis. The CaCTüS Internship is aimed at young scientists who are held back by personal, financial, regional or societal constraints, to help them develop their research careers and gain access to first-class education. The program is designed to foster inclusion, diversity, equity and access to excellent scientific facilities. We specifically encourage applications from students living in low- and middle-income countries, which are currently underrepresented in the Max Planck Society research community.
Angelo Cangelosi
Two Research Fellows in Robotics / Human Robot Interaction are required for a period of 12 months each to work on the UKRI/EPSRC project “Trustworthy Autonomous Systems Node on Trust”. This is a collaborative project of the University of Manchester’s Cognitive Robotics Lab with Heriot-Watt University Edinburgh and Imperial College London. The candidates will carry out research on robot cognitive architectures for theory of mind and trust, using a combination of machine learning and robotics methodologies, and/or human-robot interaction for trust. The research fellows will be working collaboratively as part of the Cognitive Robotics Lab at the Department of Computer Science at the University of Manchester under the supervision of Professor Angelo Cangelosi. Close collaboration with the other project partners will also be required.
Burcu Ayşen Ürgen
Bilkent University invites applications for multiple open-rank faculty positions in the Department of Neuroscience. The department plans to expand research activities in certain focus areas and accordingly seeks applications from promising or established scholars who have worked in the following or related fields: cellular/molecular/developmental neuroscience with a strong emphasis on research involving animal models; and systems/cognitive/computational neuroscience with a strong emphasis on research involving emerging data-driven approaches, including artificial intelligence, robotics, brain-machine interfaces, virtual reality, computational imaging, and theoretical modeling. Candidates with a research focus in those areas whose research has a neuroimaging component are particularly encouraged to apply. The Department's interdisciplinary Graduate Program in Neuroscience, which offers Master's and PhD degrees, was established in 2014. The department is affiliated with Bilkent's Aysel Sabuncu Brain Research Center (ASBAM) and the National Magnetic Resonance Research Center (UMRAM). Faculty affiliated with the department have access to state-of-the-art research facilities in these centers, including animal facilities, cellular/molecular laboratory infrastructure, psychophysics laboratories, eye-tracking laboratories, EEG laboratories, a human-robot interaction laboratory, and two MRI scanners (3T and 1.5T).
Professor Peter Stone
Texas Robotics at the University of Texas at Austin invites applications for tenure-track faculty positions. Outstanding candidates in all areas of Robotics will be considered. Tenure-track positions require a Ph.D. or equivalent degree in a relevant area at the time of employment. Successful candidates are expected to pursue an active research program, to teach both graduate and undergraduate courses, and to supervise students in research. The University is fully committed to building a world-class faculty and we welcome candidates who resonate with our core values of learning, discovery, freedom, leadership, individual opportunity, and responsibility. Candidates who are committed to broadening participation in robotics, at all levels, are strongly encouraged to apply.
Antonio C. Roque
The Research, Innovation and Dissemination Center for Neuromathematics (NeuroMat), hosted by the University of São Paulo (USP), Brazil, and funded by the São Paulo Research Foundation (FAPESP), is offering post-doctoral fellowships for recent PhDs with outstanding research potential. The fellowship will involve collaborations with research teams and laboratories associated with NeuroMat. The research to be developed by the post-doc fellow shall be strictly related to ongoing research lines developed by the NeuroMat team, which can be consulted on our website. The project may be developed in person at the USP laboratories on the São Paulo or Ribeirão Preto campuses, or at UNICAMP in Campinas.
N/A
PAVIS is looking to strengthen its activities on 3D multi-modal scene understanding. The research will focus on novel ML and CV methods that efficiently incorporate priors and constraints from world physical models and semantic priors, derived from vision, language models, or other modalities. The project will explore the interplay between vision and large language models to address tasks in 3D reasoning, visual (re-)localization, active vision, and neural/geometrical novel view rendering. The aim is to develop models applicable to interdisciplinary research, including drug discovery and robotics, utilizing in-house robotics platforms and HPC computational facilities.
Thilo Stadelmann
Professor of robotics and director of the Institute of Mechatronic Systems at the Zurich University of Applied Sciences.
Ahmed H. Qureshi
The Cognitive Robot Autonomy and Learning (CoRAL) Lab, led by Professor Ahmed Qureshi, has an immediate opening for a postdoctoral position focused on physics-informed methods for scaling deep reinforcement learning to complex dynamical systems. The position is initially open for one year and will be renewed annually based on performance. The candidate will have the opportunity to publish and present papers at top conferences such as RSS, ICRA, ICML, and NeurIPS and collaborate with an outstanding team of graduate and undergraduate students. The Cognitive Robot Autonomy and Learning (CoRAL) Lab is in the Department of Computer Science at Purdue University. The lab offers an excellent environment for exploring different aspects of robotics, from algorithm development to real-robot implementations.
Dr. Nicola Catenacci Volpi
This PhD project will push the boundaries of Continual Reinforcement Learning by investigating how agents can continuously learn and adapt over time, how they can autonomously develop and flexibly apply an ever-expanding repertoire of skills across various tasks, and what representations allow them to do this efficiently. The project aims to create AI systems that can sustain autonomous learning and adaptation in ever-changing environments with limited computational resources. The selected candidate will master and contribute to techniques in deep reinforcement learning, incorporating principles from probabilistic machine learning, such as information theory, intrinsic motivation, and open-ended learning frameworks. The project may use computer games as benchmarking tools or apply findings to robotic systems, including manipulators, intelligent autonomous vehicles, and humanoid robots.
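As one concrete example of the intrinsic-motivation ideas mentioned in the project description, here is a minimal count-based novelty bonus in Python. The bonus scale, the toy state space, and the random walk are illustrative assumptions; the project itself is not committed to this particular formulation.

```python
# Minimal sketch of a count-based intrinsic-motivation bonus, one common way
# of operationalising "curiosity" in continual / open-ended RL.
import numpy as np
from collections import defaultdict

visit_counts = defaultdict(int)
beta = 0.1                                    # bonus scale (illustrative)


def intrinsic_bonus(state):
    """Novelty bonus ~ 1/sqrt(N(s)); it decays as a state becomes familiar."""
    visit_counts[state] += 1
    return beta / np.sqrt(visit_counts[state])


# Example: a random walk over a handful of discrete states.
rng = np.random.default_rng(0)
for t in range(20):
    state = int(rng.integers(0, 5))
    r_int = intrinsic_bonus(state)
    # the total reward fed to the learner would be r_extrinsic + r_int
    print(f"step {t:2d}  state {state}  intrinsic bonus {r_int:.3f}")
```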
Michael Schmuker
We have two Research Fellow positions available, funded through the NSF/MRC NeuroNex grant “Odor to Action”, an international consortium of 16 world-leading research groups from the US, Canada, and the UK. The consortium will apply their joint expertise in computational and experimental neuroscience, behavioural studies, fluid dynamics, and fragrance chemistry to unlock how brains use natural stimuli to generate adaptive natural behaviours. Apply to join an interdisciplinary team of computational neuroscientists, AI researchers and computer scientists to investigate the neuro-computational principles of the sense of smell and translate them into approaches for machine and robot olfaction. Our team will investigate real-time machine olfaction using biologically realistic spiking networks, on neuromorphic hardware, embodied in robots, performing olfactory navigation in real time. We will also use machine learning to analyse experimental data provided by collaborators and to gain a deeper insight into fragrance space. For more information and to apply, please head to https://www.jobs.herts.ac.uk and search for job vacancy 029755. Informal inquiries via email are welcome!
A personal journey on understanding intelligence
The focus of this talk is not about my research in AI or Robotics but about my own journey of trying to do research and understand intelligence in a rapidly evolving research landscape. I will trace my path from conducting early-stage research during graduate school, to working on practical solutions within a startup environment, and finally to my current role where I participate in more structured research at a major tech company. Through these varied experiences, I will provide different perspectives on research and talk about how my core beliefs on intelligence have changed and sometimes even been compromised. There are no lessons to be learned from my stories, but hopefully they will be entertaining.
A Comprehensive Overview of Large Language Models
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks and beyond. This success of LLMs has led to a large influx of research contributions in this direction. These works encompass diverse topics such as architectural innovations, better training strategies, context length improvements, fine-tuning, multi-modal LLMs, robotics, datasets, benchmarking, efficiency, and more. With the rapid development of techniques and regular breakthroughs in LLM research, it has become considerably challenging to perceive the bigger picture of the advances in this direction. Considering the rapidly emerging plethora of literature on LLMs, it is imperative that the research community is able to benefit from a concise yet comprehensive overview of the recent developments in this field. This article provides an overview of the existing literature on a broad range of LLM-related concepts. Our self-contained comprehensive overview of LLMs discusses relevant background concepts along with covering the advanced topics at the frontier of research in LLMs. This review article is intended to provide not only a systematic survey but also a quick, comprehensive reference for researchers and practitioners to draw insights from extensive informative summaries of the existing works to advance LLM research.
Why robots? A brief introduction to the use of robots in psychological research
Why should psychologists be interested in robots? This talk aims to illustrate how social robots – machines with human-like features and behaviors – can offer interesting insights into the human mind. I will first provide a brief overview of how robots have been used in psychology and cognitive science research focusing on two approaches - Developmental Robotics and Human-Robot Interaction (HRI). We will then delve into recent works in HRI, including my own, in greater detail. We will also address the limitations of research thus far, such as the lack of proper controlled experiments, and discuss how the scientific community should evaluate the use of technology in educational and other social settings.
Children-Agent Interaction For Assessment and Rehabilitation: From Linguistic Skills To Mental Well-being
Socially Assistive Robots (SARs) have shown great potential to help children in therapeutic and healthcare contexts. SARs have been used for companionship, learning enhancement, social and communication skills rehabilitation for children with special needs (e.g., autism), and mood improvement. Robots can be used as novel tools to assess and rehabilitate children’s communication skills and mental well-being by providing affordable and accessible therapeutic and mental health services. In this talk, I will present the various studies I have conducted during my PhD and at the Cambridge Affective Intelligence and Robotics Lab to explore how robots can help assess and rehabilitate children’s communication skills and mental well-being. More specifically, I will provide both quantitative and qualitative results and findings from (i) an exploratory study with children with autism and global developmental disorders to investigate the use of intelligent personal assistants in therapy; (ii) an empirical study involving children with and without language disorders interacting with a physical robot, a virtual agent, and a human counterpart to assess their linguistic skills; (iii) an 8-week longitudinal study involving children with autism and language disorders who interacted either with a physical or a virtual robot to rehabilitate their linguistic skills; and (iv) an empirical study to aid the assessment of mental well-being in children. These findings can inform and help the child-robot interaction community design and develop new adaptive robots to help assess and rehabilitate linguistic skills and mental well-being in children.
Experimental Neuroscience Bootcamp
This course provides a fundamental foundation in the modern techniques of experimental neuroscience. It introduces the essentials of sensors, motor control, microcontrollers, programming, data analysis, and machine learning by guiding students through the “hands on” construction of an increasingly capable robot. In parallel, related concepts in neuroscience are introduced as nature’s solution to the challenges students encounter while designing and building their own intelligent system.
Growing Up in Academia with Emily Cross
Interdisciplinary College
The Interdisciplinary College is an annual spring school which offers a dense state-of-the-art course program in neurobiology, neural computation, cognitive science/psychology, artificial intelligence, machine learning, robotics and philosophy. It is aimed at students, postgraduates and researchers from academia and industry. This year's focus theme "Flexibility" covers (but is not limited to) the nervous system, the mind, communication, and AI & robotics. All this will be packed into a rich, interdisciplinary program of single- and multi-lecture courses, and less traditional formats.
Why would we need Cognitive Science to develop better Collaborative Robots and AI Systems?
While classical industrial robots are mostly designed for repetitive tasks, assistive robots will be challenged by a variety of different tasks in close contact with humans. Here, learning through direct interaction with humans provides a potentially powerful tool for an assistive robot to acquire new skills and to incorporate prior human knowledge during the exploration of novel tasks. Moreover, an intuitive interactive teaching process may allow non-programming experts to contribute to robotic skill learning and may help to increase acceptance of robotic systems in shared workspaces and everyday life. In this talk, I will discuss recent research I did on interactive robot skill learning and the remaining challenges on the route to human-centered teaching of assistive robots. In particular, I will also discuss potential connections and overlap with cognitive science. The presented work covers learning a library of probabilistic movement primitives from human demonstrations, intention-aware adaptation of learned skills in shared workspaces, and multi-channel interactive reinforcement learning for sequential tasks.
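For readers unfamiliar with probabilistic movement primitives, the following is a minimal ProMP-style sketch: each demonstration is projected onto radial basis functions, and a Gaussian over the resulting weight vectors captures the variability of the demonstrations. The synthetic demonstrations, basis widths, and dimensions are illustrative assumptions, not details of the presented work.

```python
# Minimal sketch of a probabilistic movement primitive (ProMP)-style model
# learned from (synthetic) demonstrations.
import numpy as np

rng = np.random.default_rng(2)
T, n_basis, n_demos = 100, 10, 20
t = np.linspace(0, 1, T)

# Radial basis functions spread over the movement duration.
centers = np.linspace(0, 1, n_basis)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.05) ** 2)
Phi /= Phi.sum(axis=1, keepdims=True)

# Synthetic demonstrations: noisy reaching movements from 0 to 1.
demos = np.stack([np.sin(0.5 * np.pi * t) + 0.05 * rng.standard_normal(T)
                  for _ in range(n_demos)])

# Fit basis-function weights per demo by least squares, then a Gaussian over them.
W = np.stack([np.linalg.lstsq(Phi, y, rcond=None)[0] for y in demos])
w_mean, w_cov = W.mean(axis=0), np.cov(W.T)

# Sample a new movement from the learned primitive.
w_sample = rng.multivariate_normal(w_mean, w_cov)
new_trajectory = Phi @ w_sample
print("sampled trajectory endpoints:", new_trajectory[0], new_trajectory[-1])
```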
NMC4 Short Talk: Brain-inspired spiking neural network controller for a neurorobotic whisker system
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model to study active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modelling trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot, exploiting the Neurorobotics Platform, a simulation platform offering a virtual environment to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.
Embodied Artificial Intelligence: Building brain and body together in bio-inspired robots
TBC
StereoSpike: Depth Learning with a Spiking Neural Network
Depth estimation is an important computer vision task, useful in particular for navigation in autonomous vehicles, or for object manipulation in robotics. Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture, which we named StereoSpike. More specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It provides a depth ground truth, which was used to train StereoSpike in a supervised manner using surrogate gradient descent. We propose a novel readout paradigm to obtain a dense analog prediction (the depth of each pixel) from the spikes of the decoder. We demonstrate that this architecture generalizes very well, even better than its non-spiking counterparts, leading to state-of-the-art test accuracy. To the best of our knowledge, this is the first time that such a large-scale regression problem has been solved by a fully spiking network. Finally, we show that low firing rates (<10%) can be obtained via regularization, with a minimal cost in accuracy. This means that StereoSpike could be implemented efficiently on neuromorphic chips, opening the door for low-power real-time embedded systems.
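To illustrate the surrogate-gradient mechanism referred to in the abstract, here is a minimal PyTorch sketch of a single leaky spiking unit trained with a fast-sigmoid surrogate gradient to hit a target spike count. It is a toy example under assumed parameters, not the StereoSpike architecture or its training setup.

```python
# Minimal sketch of surrogate-gradient training for a spiking unit:
# Heaviside spike in the forward pass, smooth surrogate derivative backward.
import torch


class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                       # hard threshold forward

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2  # fast-sigmoid derivative
        return grad_output * surrogate


spike = SpikeFn.apply
torch.manual_seed(0)

n_in, n_steps = 8, 50
w = torch.zeros(n_in, requires_grad=True)
inputs = (torch.rand(n_steps, n_in) < 0.2).float()   # synthetic input spike trains
target_count = torch.tensor(10.0)                    # desired number of output spikes
opt = torch.optim.Adam([w], lr=0.1)

for epoch in range(200):
    v, out_spikes = torch.zeros(()), []
    for t in range(n_steps):
        v = 0.9 * v + inputs[t] @ w                  # leaky integration
        s = spike(v - 1.0)                           # threshold at 1.0
        v = v * (1.0 - s)                            # reset on spike
        out_spikes.append(s)
    loss = (torch.stack(out_spikes).sum() - target_count) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final spike count:", float(torch.stack(out_spikes).sum()))
```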
Neuropunk revolution and its implementation via real-time neurosimulations and their integrations
In this talk I present the perspectives of the "neuropunk revolution" technologies. One could understand the "neuropunk revolution" as the integration of real-time neurosimulations into biological nervous/motor systems via neurostimulation, or into artificial robotic systems via integration with actuators. I see the added value of real-time neurosimulations as a bridge technology for the set of developed technologies (BCI, neuroprosthetics, AI, robotics), providing bio-compatible integration into biological or artificial limbs. Here I present three types of integration of the "neuropunk revolution" technologies: inbound, outbound, and closed-loop in-outbound systems. The proposed concept shifts how we now view the set of technologies including AI, BCI, neuroprosthetics and robotics; for example, a simulated part of the nervous system, external to the body, can be integrated back into the biological nervous system or muscles.
Technologies for large scale cortical imaging and electrophysiology
Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior. However, there is a lack of tools that allow simultaneous monitoring and perturbing of neural activity from multiple cortical regions. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures, such as large bilateral craniotomies across the cortex and skull thinning, in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (>300 days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular and subcellular resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm² and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex with resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open field behaviors, social interactions and transitions from wakefulness to sleep.
How can we learn from nature to build better polymer composites?
Nature is replete with extraordinary materials that can grow, move, respond, and adapt. In this talk I will describe our ongoing efforts to develop advanced polymeric materials, inspired by nature. First, I will describe my group’s efforts to develop ultrastiff, ultratough materials inspired by the byssal materials of marine mussels. These adhesive contacts allow mussels to secure themselves to rocks, wood, metals and other surfaces in the harsh conditions of the intertidal zone. By developing a foundational understanding of the structure-mechanics relationships and processing of the natural system, we can design high-performance materials that are extremely strong without compromising extensibility, as well as macroporous materials with tunable toughness and strength. In the second half of the talk, I will describe new efforts to exploit light as a means of remote control and power. By leveraging the phototransduction pathways of highly-absorbing, negatively photochromic molecules, we can drive the motion of amorphous polymeric materials as well as liquid flows. These innovations enable applications in packaging, connective tissue repair, soft robotics, and optofluidics.
Brainstorms Festival
The Brainstorms Festival is the No. 1 online neuroscience and AI event for scientists, businesses, investors and startups. Join and listen to talks from leading scientists, take part in interactive discussions, and network with the people driving neurotech and AI innovation globally. The festival provides a digital playground for visionaries with dozens of medical innovations, panel discussions, workshops, a hackathon, and a neuroethics panel discussion, a crucial topic for neurodiversity and disability rights. Register now and be part of our amazing crowd!
Data-driven Artificial Social Intelligence: From Social Appropriateness to Fairness
Designing artificially intelligent systems and interfaces with socio-emotional skills is a challenging task. Progress in industry and developments in academia provide us with a positive outlook; however, the artificial social and emotional intelligence of current technology is still limited. My lab's research has been pushing the state of the art in a wide spectrum of research topics in this area, including the design and creation of new datasets; novel feature representations and learning algorithms for sensing and understanding human nonverbal behaviours in solo, dyadic and group settings; designing longitudinal human-robot interaction studies for wellbeing; and investigating how to mitigate the bias that creeps into these systems. In this talk, I will present some of my research team's explorations in these areas, including social appropriateness of robot actions, virtual reality based cognitive training with affective adaptation, and bias and fairness in data-driven emotionally intelligent systems.
Abstraction and Analogy in Natural and Artificial Intelligence
In 1955, John McCarthy and colleagues proposed an AI summer research project with the following aim: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” More than six decades later, all of these research topics remain open and actively investigated in the AI community. While AI has made dramatic progress over the last decade in areas such as vision, natural language processing, and robotics, current AI systems still almost entirely lack the ability to form humanlike concepts and abstractions. Some cognitive scientists have proposed that analogy-making is a central mechanism for conceptual abstraction and understanding in humans. Douglas Hofstadter called analogy-making “the core of cognition”, and Hofstadter and co-author Emmanuel Sander noted, “Without concepts there can be no thought, and without analogies there can be no concepts.” In this talk I will reflect on the role played by analogy-making at all levels of intelligence, and on prospects for developing AI systems with humanlike abilities for abstraction and analogy.
Affordable Robots/Computer Systems to Identify, Assess, and Treat Impairment After Brain Injury
Non-traumatic brain injury due to stroke, cerebral palsy and HIV often results in serious long-term disability worldwide, affecting more than 150 million persons globally, with the majority living in low- and middle-income countries. These diseases often result in varying levels of motor and cognitive impairment due to brain injury, which then affects the person's ability to complete activities of daily living and fully participate in society. Increasingly, advanced technologies are being used to support identification, diagnosis, assessment, and therapy for patients with brain injury. Specifically, robot and mechatronic systems can provide patients, physicians and rehabilitation clinical providers with additional support to care for and improve the quality of life of children and adults with motor and cognitive impairment. This talk will provide a brief introduction to the area of rehabilitation robotics and, via case studies, illustrate how computer/technology-assisted rehabilitation systems can be developed and used to assess motor and cognitive impairment, detect early evidence of functional impairment, and augment therapy in high- and low-resource settings.
Motility control in biological microswimmers
It is often assumed that biological swimmers conform faithfully to certain stereotypes assigned to them by physicists and mathematicians, when the reality is in fact much more complicated. In this talk we will use a combination of theory, experiments, and robotics, to understand the physical and evolutionary basis of motility control in a number of distinguished organisms. These organisms differ markedly in terms of their size, shape, and arrangement of locomotor appendages, but are united in their use of cilia - the ultimate shape-shifting organelle - to achieve self-propulsion and navigation.
What can we further learn from the brain for artificial intelligence?
Deep learning is a prime example of how brain-inspired computing can benefit the development of artificial intelligence. But what else can we learn from the brain for bringing AI and robotics to the next level? Energy efficiency and data efficiency are major features of the brain and human cognition that today's deep learning has yet to deliver. The brain can be seen as a multi-agent system of heterogeneous learners using different representations and algorithms. The flexible use of reactive, model-free control and model-based "mental simulation" appears to be the basis for the computational and data efficiency of the brain. How the brain efficiently acquires and flexibly combines prediction and control modules is a major open problem in neuroscience, and its solution should help the development of more flexible and autonomous AI and robotics.
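One simple way to read the "flexible use of model-free control and model-based mental simulation" is as an arbitration between cached and simulated action values. The sketch below is an illustrative assumption of such a scheme (random toy model, fixed reliability weight), not a claim about the speaker's proposal.

```python
# Minimal sketch of arbitration between model-free (cached) and
# model-based (simulated) action values.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 5, 2

Q_mf = np.zeros((n_states, n_actions))              # model-free cached values
model_T = rng.random((n_states, n_actions, n_states))
model_T /= model_T.sum(axis=-1, keepdims=True)      # learned transition model
model_R = rng.random((n_states, n_actions))         # learned reward model


def model_based_value(state, depth=2, gamma=0.9):
    """Shallow lookahead ("mental simulation") using the learned model."""
    if depth == 0:
        return np.zeros(n_actions)
    q = np.empty(n_actions)
    for a in range(n_actions):
        next_values = np.array([model_based_value(s2, depth - 1, gamma).max()
                                for s2 in range(n_states)])
        q[a] = model_R[state, a] + gamma * model_T[state, a] @ next_values
    return q


def choose_action(state, model_reliability):
    """Blend simulated and cached values; rely on simulation when the model is reliable."""
    q = (model_reliability * model_based_value(state)
         + (1 - model_reliability) * Q_mf[state])
    return int(np.argmax(q))


print("chosen action in state 0:", choose_action(0, model_reliability=0.8))
```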