Title: Learning Complex Robotic Skills via Conditional Neural Movement Primitives
Abstract: Predicting the consequences of one's own actions is an important requirement for safe human-robot collaboration and for personal robotics applications. Neurophysiological and behavioral data suggest that the human brain benefits from internal forward models that continuously predict the outcomes of generated motor commands for trajectory planning, movement control, and multi-step planning. In this talk, I will present our recent Learning from Demonstration framework, Conditional Neural Movement Primitives (CNMPs) [1], which is based on Conditional Neural Processes. CNMPs extract prior knowledge directly from the training data by sampling observations from it, and use this knowledge to predict a conditional distribution over any other target points. CNMPs learn complex temporal, multi-modal sensorimotor relations in connection with external parameters and goals; produce movement trajectories in joint or task space; and execute these trajectories through a high-level feedback control loop. Conditioned on an external goal encoded in the sensorimotor space of the robot, the CNMP generates the sensorimotor trajectory that is expected to be observed during successful execution of the task, and the corresponding motor commands are executed. After presenting the basic CNMP framework, I will discuss how to form flexible skills by combining Learning from Demonstration and Reinforcement Learning via representation sharing [2], and deep modality blending networks (DMBNs) [3], which create a common latent space from the multi-modal experience of a robot by blending multi-modal signals with a stochastic weighting mechanism.
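To make the conditioning mechanism concrete, the following is a minimal sketch of a CNMP-style model in PyTorch, not the authors' implementation: a few observed (time, sensorimotor value) pairs from a demonstration are encoded and mean-aggregated into a latent representation, which is then decoded at query times into a mean and standard deviation. All layer sizes, variable names, and the toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNMP(nn.Module):
    """Sketch of a Conditional Neural Movement Primitive.

    Observations are (time, sensorimotor value) pairs sampled from a
    demonstration; queries are target time points. The decoder outputs
    a mean and standard deviation, i.e. a conditional distribution over
    the trajectory at the queried times.
    """

    def __init__(self, d_sm=2, d_latent=128):
        super().__init__()
        # Encoder maps each (t, SM(t)) observation to a latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(1 + d_sm, d_latent), nn.ReLU(),
            nn.Linear(d_latent, d_latent),
        )
        # Decoder maps (aggregated latent, query time) to mean and std.
        self.decoder = nn.Sequential(
            nn.Linear(d_latent + 1, d_latent), nn.ReLU(),
            nn.Linear(d_latent, 2 * d_sm),
        )

    def forward(self, obs, t_query):
        # obs: (n_obs, 1 + d_sm); t_query: (n_query, 1)
        r = self.encoder(obs).mean(dim=0)     # permutation-invariant aggregation
        r = r.expand(t_query.shape[0], -1)    # one latent copy per query point
        out = self.decoder(torch.cat([r, t_query], dim=-1))
        mean, raw_std = out.chunk(2, dim=-1)
        std = nn.functional.softplus(raw_std) + 1e-4  # keep std positive
        return mean, std

# Training step: condition on a few observed points of a demonstration and
# maximize the log-likelihood of other target points of the same trajectory.
model = CNMP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
traj = torch.rand(100, 3)                   # toy demo: columns = (t, SM_1, SM_2)
obs = traj[torch.randint(0, 100, (5,))]     # sampled conditioning observations
tgt = traj[torch.randint(0, 100, (20,))]    # target points to predict
mean, std = model(obs, tgt[:, :1])
loss = -torch.distributions.Normal(mean, std).log_prob(tgt[:, 1:]).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

At execution time, conditioning on a goal point in the sensorimotor space amounts to placing that point in `obs`; the decoder then produces the trajectory distribution consistent with reaching it.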
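The stochastic weighting idea behind DMBNs can likewise be sketched in a few lines. The snippet below is an assumption-laden illustration, not the paper's scheme: per-modality latent vectors are combined with random convex weights (here drawn from a Dirichlet distribution) so that each training step sees a different blend, encouraging a shared latent space.

```python
import torch

def blend_modalities(latents):
    """DMBN-style stochastic blending sketch.

    latents: (n_modalities, d) tensor of modality-specific encodings.
    Weights are sampled on the simplex, so the result is a random
    convex combination of the modality latents.
    """
    n = latents.shape[0]
    w = torch.distributions.Dirichlet(torch.ones(n)).sample()  # simplex weights
    return (w.unsqueeze(-1) * latents).sum(dim=0)              # convex blend

z_vision = torch.randn(128)   # illustrative modality encodings
z_joint = torch.randn(128)
z = blend_modalities(torch.stack([z_vision, z_joint]))
```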
References:
[1] Seker et al., "Conditional Neural Movement Primitives," Robotics: Science and Systems (RSS), 2019.
[2] Akbulut et al., "ACNMP: Flexible Skill Formation with Learning from Demonstration and Reinforcement Learning via Representation Sharing," Conference on Robot Learning (CoRL), 2020.
[3] Seker et al., "Imitation and Mirror Systems in Robots through Deep Modality Blending Networks," Neural Networks, vol. 146, pp. 22-35, 2022.
Bio: Emre Ugur is an Associate Professor in the Department of Computer Engineering at Bogazici University, where he chairs the Cognitive Science MA Program, serves as vice-chair of the Department of Computer Engineering, and heads the Cognition, Learning and Robotics (CoLoRs) lab (https://colors.cmpe.boun.edu.tr/). He received his B.S., M.S., and Ph.D. degrees in Computer Engineering from Middle East Technical University (METU, Turkey). He was a research assistant in the KOVAN Lab at METU (2003-2009), worked as a research scientist at ATR, Japan (2009-2013), worked as a senior researcher at the University of Innsbruck (2013-2016), and visited Osaka University as a specially appointed Assistant and Associate Professor (2015 and 2016). He was the Principal Investigator of the IMAGINE project supported by the European Commission, and is currently the PI of the EXO-AI-FLEX and Deepsym projects supported by TUBITAK. He is interested in robotics, robot learning, and cognitive robotics.