
Seminar by Emre Uğur on November 18th at 12:30, BMB5, Dept. of Computer Eng.

Title: Learning Complex Robotic Skills via Conditional Neural Movement Primitives

Abstract: Predicting the consequences of one's own actions is an important requirement for safe human-robot collaboration and its application to personal robotics. Neurophysiological and behavioral data suggest that the human brain benefits from internal forward models that continuously predict the outcomes of the generated motor commands for trajectory planning, movement control, and multi-step planning. In this talk, I will present our recent Learning from Demonstration framework [1], which is based on Conditional Neural Processes. CNMPs extract prior knowledge directly from the training data by sampling observations from it, and use it to predict a conditional distribution over any other target points. CNMPs specifically learn complex temporal multi-modal sensorimotor relations in connection with external parameters and goals; produce movement trajectories in joint or task space; and execute these trajectories through a high-level feedback control loop. Conditioned on an external goal encoded in the sensorimotor space of the robot, the CNMP generates the sensorimotor trajectory that is expected to be observed during successful execution of the task, and the corresponding motor commands are executed. After presenting the basic CNMP framework, I will talk about how to form flexible skills by combining Learning from Demonstration and Reinforcement Learning via Representation Sharing [2], and about deep modality blending networks (DMBN) [3], which create a common latent space from the multi-modal experience of a robot by blending multi-modal signals with a stochastic weighting mechanism.
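The conditioning-and-prediction step described in the abstract can be sketched schematically. The following minimal NumPy forward pass (with untrained random weights and toy dimensions, all chosen here for illustration and not taken from the paper) shows the general CNMP-style idea: observed (time, value) pairs are encoded and averaged into a single latent representation, which is then decoded together with a query time into a predicted mean and standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, purely illustrative: (t, y) observation pairs,
# latent size, hidden size
d_obs, d_latent, d_hidden = 2, 8, 16

# Randomly initialized weights stand in for trained parameters
W_enc = rng.normal(size=(d_obs, d_latent))
W_dec = rng.normal(size=(d_latent + 1, d_hidden))
W_out = rng.normal(size=(d_hidden, 2))  # predicts mean and log-std of y

def cnmp_forward(observations, target_t):
    """Condition on (t, y) observation pairs and predict a distribution
    over y at a query time target_t (schematic forward pass)."""
    # Encode each observation, then average into one latent summary,
    # so the model can condition on any number of observations
    r = np.tanh(observations @ W_enc).mean(axis=0)
    # Decode the latent summary together with the query time
    h = np.tanh(np.concatenate([r, [target_t]]) @ W_dec)
    mean, log_std = h @ W_out
    return mean, np.exp(log_std)  # exp keeps the std positive

# Condition on two demonstration points, then query an intermediate time
obs = np.array([[0.0, 0.1],
                [1.0, 0.9]])
mu, sigma = cnmp_forward(obs, 0.5)
```

Averaging the per-observation encodings is what makes the prediction conditional on a variable-size set of observations, and predicting a standard deviation alongside the mean is what yields a distribution over trajectories rather than a single point estimate.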

References:
[1] Seker et al. Conditional Neural Movement Primitives, Robotics: Science and Systems (RSS), 2019
[2] Akbulut et al. ACNMP: Flexible Skill Formation with Learning from Demonstration and Reinforcement Learning via Representation Sharing, Conference on Robot Learning (CoRL), 2020
[3] Seker et al. Imitation and Mirror Systems in Robots through Deep Modality Blending Networks, Neural Networks, 146, pp. 22-35, 2022

Bio: Emre Ugur is an Associate Professor in the Dept. of Computer Engineering at Bogazici University, the chair of the Cognitive Science MA Program, the vice-chair of the Dept. of Computer Engineering, and the head of the Cognition, Learning and Robotics (CoLoRs) lab (https://colors.cmpe.boun.edu.tr/). He received his BS, MSc, and PhD degrees in Computer Engineering from Middle East Technical University (METU, Turkey). He was a research assistant at KOVAN Lab., METU (2003-2009); worked as a research scientist at ATR, Japan (2009-2013); visited Osaka University as a specially appointed Assistant and Associate Professor (2015 and 2016); and worked as a senior researcher at the University of Innsbruck (2013-2016). He was the Principal Investigator of the IMAGINE project supported by the European Commission. He is currently the PI of the EXO-AI-FLEX and Deepsym projects supported by TUBITAK. He is interested in robotics, robot learning, and cognitive robotics.