Seminar by Erdem Bıyık on Monday December 12th @19:30, Online

Dr. Erdem Bıyık from the University of California, Berkeley, will be the next guest on ROMER Talks on Monday, December 12th @19:30.

The seminar will be held online; the Zoom session information is given below. Please feel free to share and join us at the seminar.

Zoom link: https://zoom.us/j/93443337016?pwd=WlhreU0zTy9mZVkzRVIxRlhwWFZzQT09

Title: Learning Preferences for Interactive Autonomy

Abstract: In human-robot interaction, or more generally in multi-agent systems, we often have decentralized agents that need to perform a task together. In such settings, it is crucial to be able to anticipate the actions of other agents; without this ability, agents are often doomed to perform very poorly. Humans are usually good at this, mostly because we maintain good estimates of what other agents are trying to do. We want to give robots the same ability through reward learning and partner modeling. In this talk, I will present active learning approaches to this problem and show how we can leverage preference data to learn objectives. I will show how preferences can help reward learning in settings where demonstration data may fail, and how partner modeling enables decentralized agents to cooperate efficiently.
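For illustration, here is a minimal, self-contained sketch of preference-based reward learning in the Bradley-Terry style that this line of work commonly builds on: a linear reward over trajectory features is fit by gradient ascent on the likelihood of pairwise comparisons. The linear reward model, feature dimensions, and all names are illustrative assumptions, not code from the talk.

# A minimal sketch of preference-based reward learning (Bradley-Terry style).
# The linear reward model, feature dimensions, and all names are illustrative
# assumptions, not code from the talk.
import numpy as np

def fit_reward(feats_a, feats_b, prefs, lr=0.1, steps=500):
    """Fit linear reward weights w (with r = w . phi) by gradient ascent on
    the log-likelihood of pairwise preferences.

    feats_a, feats_b: (N, d) trajectory feature vectors for each query pair.
    prefs: (N,) array, 1.0 if trajectory A was preferred, else 0.0.
    """
    w = np.zeros(feats_a.shape[1])
    for _ in range(steps):
        diff = np.clip((feats_a - feats_b) @ w, -30.0, 30.0)  # stability clip
        p_a = 1.0 / (1.0 + np.exp(-diff))           # P(A preferred | w)
        grad = (prefs - p_a) @ (feats_a - feats_b)  # d log-likelihood / d w
        w += lr * grad
    return w

# Toy usage: recover a hidden reward from 100 noisy pairwise comparisons.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -0.5])                      # e.g. progress vs. effort
phi_a = rng.normal(size=(100, 2))
phi_b = rng.normal(size=(100, 2))
prefs = ((phi_a - phi_b) @ w_true + 0.1 * rng.normal(size=100) > 0).astype(float)
w_hat = fit_reward(phi_a, phi_b, prefs)
print("recovered direction:", w_hat / np.linalg.norm(w_hat))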

Bio: Erdem Bıyık is a postdoctoral researcher at the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. He received his B.Sc. degree from Bilkent University, Turkey, in 2017 and his Ph.D. degree from Stanford University in 2022. His research interests lie at the intersection of robotics, artificial intelligence, machine learning, and game theory. He is interested in enabling robots to actively learn from various forms of human feedback and in designing robot policies that improve the efficiency of multi-agent systems in both cooperative and competitive settings. He also worked at Google as a research intern in 2021, where he adapted his active robot learning algorithms to recommender systems. He will join the University of Southern California as an assistant professor in 2023.

https://www.linkedin.com/in/erdemb

https://people.eecs.berkeley.edu/~ebiyik/

Schedule

Seminar by Çağatay Başdoğan on November 30th @19:30, Online

The seminar will be held online; the Zoom session information is given below. Please feel free to share and join us at the seminar.

Zoom link: https://zoom.us/j/93443337016?pwd=WlhreU0zTy9mZVkzRVIxRlhwWFZzQT09

Title: An Adaptive Admittance Controller for Collaborative Drilling with a Robot Based on Subtask Classification via Deep Learning

Abstract: We propose a supervised learning approach based on an Artificial Neural Network (ANN) model for real-time classification of subtasks in a physical human–robot interaction (pHRI) task involving contact with a stiff environment. In this regard, we consider three subtasks for a given pHRI task: Idle, Driving, and Contact. Based on this classification, the parameters of an admittance controller that regulates the interaction between human and robot are adjusted adaptively in real time to make the robot more transparent to the operator (i.e., less resistant) during the Driving phase and more stable during the Contact phase. The Idle phase is primarily used to detect the initiation of the task. Experimental results have shown that the ANN model can learn to detect the subtasks under different admittance controller conditions with an accuracy of 98% for 12 participants. Finally, we show that the admittance adaptation based on the proposed subtask classifier leads to 20% lower human effort (i.e., higher transparency) in the Driving phase and 25% lower oscillation amplitude (i.e., higher stability) during drilling in the Contact phase, compared to an admittance controller with fixed parameters.
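As a rough illustration of the adaptation scheme described in the abstract, the sketch below switches the parameters of a one-dimensional admittance law (m * dv/dt + b * v = f_human) according to the classified subtask. The threshold classifier stands in for the paper's trained ANN, and the gain values are illustrative assumptions, not the tuned parameters from the study.

# A rough sketch of subtask-dependent admittance adaptation. The threshold
# classifier stands in for the paper's trained ANN, and the gains below are
# illustrative assumptions, not the tuned values from the study.

# Hypothetical admittance parameters (virtual mass m, damping b) per subtask.
PARAMS = {
    "Idle":    {"m": 10.0, "b": 80.0},
    "Driving": {"m": 4.0,  "b": 20.0},   # low damping -> transparent to the human
    "Contact": {"m": 12.0, "b": 120.0},  # high damping -> stable against the surface
}

def classify_subtask(force, velocity):
    """Threshold stand-in for the ANN subtask classifier."""
    if abs(force) < 2.0 and abs(velocity) < 0.01:
        return "Idle"
    if abs(force) > 15.0:  # large interaction force -> contact with stiff surface
        return "Contact"
    return "Driving"

def admittance_step(force, v, dt=0.002):
    """One control step of m * dv/dt + b * v = f_human (explicit Euler)."""
    p = PARAMS[classify_subtask(force, v)]
    dv = (force - p["b"] * v) / p["m"]
    return v + dv * dt  # commanded velocity sent to the robot

# Toy run: 1 s of free driving (5 N), then 1 s pressing into a surface (25 N).
v = 0.0
for f in [5.0] * 500 + [25.0] * 500:
    v = admittance_step(f, v)
print(f"commanded velocity in Contact phase: {v:.4f} m/s")  # ~ 25/120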

Bio: Prof. Basdogan has been a faculty member in the College of Engineering at Koc University since 2002. Before joining Koc University, he was a senior member of technical staff in the Information and Computer Science Division of the NASA Jet Propulsion Laboratory at the California Institute of Technology (Caltech) from 1999 to 2002. At JPL, he worked on the 3D reconstruction of Martian models from stereo images captured by a rover and their haptic visualization on Earth. He moved to JPL from the Massachusetts Institute of Technology (MIT), where he was a research scientist and principal investigator at the MIT Research Laboratory of Electronics and a member of the MIT Touch Lab from 1996 to 1999. At MIT, he was involved in the development of algorithms that enable a user to touch and feel virtual objects through a haptic device (a force-reflecting robotic arm). He received his Ph.D. degree from Southern Methodist University in 1994 and worked on medical simulation and robotics for Musculographics Inc. at Northwestern University Research Park for two years before moving to MIT. Prof. Basdogan conducts research and development in the areas of human-machine interfaces, control systems, robotics, mechatronics, human-robot interaction, biomechanics, computer graphics, and virtual reality technology. In particular, he is known for his work in human and machine haptics (the sense of touch), with applications to medical robotics and simulation, robotic path planning, micro/nano/optical tele-manipulation, human-robot interaction, molecular docking, information visualization, and human perception and cognition. In addition to serving on the program and organizing committees of several conferences and journals, he chaired the IEEE World Haptics Conference in 2011.

Prof. Dr. Cagatay Basdogan (Google Scholar)

Director of Robotics and Mechatronics Laboratory (http://rml.ku.edu.tr)

Faculty Member at KUIS AI-Center (http://ai.ku.edu.tr)

Koc University (http://www.ku.edu.tr)

Sariyer, Istanbul, 34450

Phone: +90 212 338 1721

e-mail: cbasdogan@ku.edu.tr

http://home.ku.edu.tr/~cbasdogan/

Seminar by Emre Uğur on November 18th @12:30, BMB-5, Dept. of Computer Eng.

Assoc. Prof. Emre Uğur from Boğaziçi University will be the next guest on ROMER Talks on Friday, November 18th @12:30. The face-to-face seminar will be held in BMB-5 at the Department of Computer Engineering.

Title: Learning Complex Robotic Skills via Conditional Neural Movement Primitives

Abstract: Predicting the consequences of one's own actions is an important requirement for safe human-robot collaboration and its application to personal robotics. Neurophysiological and behavioral data suggest that the human brain benefits from internal forward models that continuously predict the outcomes of the generated motor commands for trajectory planning, movement control, and multi-step planning. In this talk, I will present our recent Learning from Demonstration framework [1], which is based on Conditional Neural Processes. CNMPs extract prior knowledge directly from the training data by sampling observations from it, and use it to predict a conditional distribution over any other target points. CNMPs learn complex temporal, multi-modal sensorimotor relations in connection with external parameters and goals; produce movement trajectories in joint or task space; and execute these trajectories through a high-level feedback control loop. When conditioned on an external goal encoded in the sensorimotor space of the robot, the CNMP generates the sensorimotor trajectory expected to be observed during successful execution of the task, and the corresponding motor commands are executed. After presenting the basic CNMP framework, I will discuss how to form flexible skills by combining Learning from Demonstration and Reinforcement Learning via Representation Sharing [2], and deep modality blending networks (DMBN) [3], which create a common latent space from the multi-modal experience of a robot by blending multi-modal signals with a stochastic weighting mechanism.

References:
[1] Seker et al., "Conditional Neural Movement Primitives," Robotics: Science and Systems (RSS), 2019.
[2] Akbulut et al., "ACNMP: Flexible Skill Formation with Learning from Demonstration and Reinforcement Learning via Representation Sharing," Conference on Robot Learning (CoRL), 2020.
[3] Seker et al., "Imitation and Mirror Systems in Robots through Deep Modality Blending Networks," Neural Networks, 146, pp. 22-35, 2022.
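For readers unfamiliar with the underlying model, below is a minimal forward-pass sketch of the Conditional Neural Process mechanism that the CNMPs in the abstract above build on: observed (time, value) pairs are embedded and averaged into a permutation-invariant latent, which is decoded together with a query time into a predictive mean and standard deviation. The weights here are random and untrained, and all layer sizes and names are illustrative assumptions; in practice the networks are trained to maximize the likelihood of held-out target points.

# A minimal forward-pass sketch of the Conditional Neural Process idea behind
# CNMPs: embed observed (t, y) pairs, average them into a permutation-invariant
# latent, then decode (latent, query t) into a predictive mean and std.
# Weights are random and untrained; sizes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LATENT = 2, 16                      # (t, y) pair -> latent embedding

W_enc = rng.normal(scale=0.5, size=(D_IN, D_LATENT))
W_dec = rng.normal(scale=0.5, size=(D_LATENT + 1, 2))  # latent + t -> (mu, log_std)

def encode(obs):
    """obs: (N, 2) array of observed (t, y) pairs -> single latent vector."""
    h = np.tanh(obs @ W_enc)
    return h.mean(axis=0)       # mean aggregation: invariant to order and count

def decode(latent, t_query):
    """Predict mean and std of y at each query time, conditioned on the latent."""
    x = np.column_stack([np.tile(latent, (len(t_query), 1)), t_query])
    out = x @ W_dec
    return out[:, 0], np.exp(out[:, 1])     # (mu, std)

# Condition on two observed trajectory points, then query five time steps.
obs = np.array([[0.0, 0.1], [0.5, 0.8]])    # e.g. (time, joint angle)
mu, std = decode(encode(obs), np.linspace(0.0, 1.0, 5))
print("predicted means:", np.round(mu, 3))
print("predicted stds: ", np.round(std, 3))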

Bio: Emre Ugur is an Associate Professor in the Dept. of Computer Engineering at Bogazici University, chair of the Cognitive Science MA Program, vice-chair of the Dept. of Computer Engineering, and head of the Cognition, Learning and Robotics (CoLoRs) lab (https://colors.cmpe.boun.edu.tr/). He received his B.S., M.Sc., and Ph.D. degrees in Computer Engineering from Middle East Technical University (METU, Turkey). He was a research assistant in the KOVAN Lab at METU (2003-2009), worked as a research scientist at ATR, Japan (2009-2013), visited Osaka University as a specially appointed Assistant and later Associate Professor (2015 and 2016), and worked as a senior researcher at the University of Innsbruck (2013-2016). He was the Principal Investigator of the IMAGINE project supported by the European Commission and is currently PI of the EXO-AI-FLEX and Deepsym projects supported by TUBITAK. He is interested in robotics, robot learning, and cognitive robotics.

Seminar by İbrahim Volkan İşler on November 9th @19:30

Prof. Dr. İbrahim Volkan İşler from the Computer Science & Engineering Department of the University of Minnesota will be the next guest on ROMER Talks on Wednesday, November 9th @19:30. The seminar will be held online; the Zoom session information is given below. Please feel free to share and join us at the seminar.

Join Zoom Meeting: https://zoom.us/j/94177086079?pwd=eWtXeFFOWXJsV0kzdUJGTW1pUGhGdz09

Meeting ID: 941 7708 6079

Passcode: 349877

Title: From Surveying Farms to Tidying our Homes with Robots

Abstract: For decades, the robotics community has been working on developing intelligent autonomous machines that can perform complex tasks in unstructured environments. We are now closer than ever to delivering on this promise. Robotic systems are being developed, tested, and deployed for a wide range of applications. In this talk, I will present our work on building robots for agriculture and home automation, two application domains with distinct sets of challenges. In agriculture, robots must be capable of operating on very large farms under rough conditions while maintaining the precision to efficiently perform tasks such as yield mapping, fruit picking, and weeding. In these applications, state-of-the-art perception algorithms can generate intermediate geometric representations of the environment; however, the resulting planning problems are often hard. I will present some of our work on tracking and mapping and give examples of field deployments. In home automation, robots must be able to handle a large variety of objects and clutter. In such settings, generating precise geometric models as intermediate representations is not always possible. To address this challenge, I will present our recent and ongoing work on developing state representations for coupled perception and action planning for representative home automation applications such as decluttering.

Bio: Ibrahim Volkan Isler is a Professor of Computer Science & Engineering at the University of Minnesota and the head of the Samsung AI Center in New York.

https://www-users.cse.umn.edu/~isler/

https://rsn.umn.edu/

https://www.linkedin.com/in/volkan-isler

Seminar by Onur Özcan on October 21st @12:30

The seminar will be held at Sevim Tan Auditorium (D231), Electrical and Electronics Engineering Department, METU. Please feel free to share and join us at the seminar.

Asst. Prof. Dr. Onur Özcan, Bilkent University Mechanical Engineering Department

Title: Miniature Bio-Inspired Robots: Rigid to Compliant

Abstract: As robotics researchers, we try to build highly mobile, efficient, and robust robots that match living organisms in locomotion performance. Achieving high-performance locomotion becomes more of a challenge as the size of the robot decreases. We observe that miniature robots' biological counterparts, i.e., insects and small animals, have extraordinary locomotion capabilities such as running, jumping, and climbing robustly over various terrains. Despite recent advances in the field of miniature robotics, the design and capabilities of miniature robots are still limited by the unavailability of fabrication methods, the rigidity of our mechanical structures, and our poor grasp of the physics behind miniature robot locomotion. This talk addresses these challenges, focusing on the mechanical design and fabrication of Bilkent miniature robots, the effects of their structural compliance on robot locomotion, and the modeling efforts conducted to understand locomotion at the miniature scale.

Bio: Dr. Onur Özcan creates bio-inspired miniature robots through research at the interface of mechanical engineering and robotics. He received his B.S. (2007) in Mechatronics Engineering at Sabancı University and his M.S. (2010) and Ph.D. (2012) in Mechanical Engineering at Carnegie Mellon University in Pittsburgh, Pennsylvania, USA, where he worked on the control and automation of tip-directed nanoscale fabrication. As a postdoctoral fellow, he conducted research on the fabrication and control of miniature crawling robots at Harvard University's School of Engineering and Applied Sciences and the Wyss Institute for Biologically Inspired Engineering from April 2012 to January 2015. Following his postdoctoral position, he joined the Bilkent University Mechanical Engineering Department as an Assistant Professor in January 2015. He leads the Bilkent Miniature Robotics Lab and is active in research in the fields of miniature and soft robotics. He runs several Tübitak-funded projects on miniature and soft robots, has more than 35 publications in related conferences and journals, and serves as an associate editor for the soft robotics field in IEEE Robotics and Automation Letters.

http://web3.bilkent.edu.tr/minirobots/

Seminar by Mehmet Mutlu on October 19th @10:00

Mehmet Mutlu from ANYbotics, Zurich, Switzerland will be the next guest on ROMER Talks on Wednesday, October 19th @10:00. The seminar will be held at ARC-210, Ayaslı Research Center. Please feel free to share and join us at the seminar.

Title: ANYbotics: Creating a Workforce of Autonomous Robots

Abstract: Mobile robots are gradually entering the job market to take over dangerous, dirty, and repetitive tasks from people and to bring superhuman precision. For the first time in history, the advanced locomotion capabilities of legged robots enable them to enter generic industrial work environments that are not specifically designed or simplified for robots. ANYmal is an autonomous quadrupedal industrial inspection robot designed and produced by ANYbotics AG, Switzerland. ANYbotics is a scale-up company spun off from ETH Zurich. ANYbotics' end-to-end robotic solution automates industrial inspections.

ANYbotics' robots ANYmal and ANYmal X provide plant operators with the information to maximize equipment uptime and improve safety while reducing costs. In this talk, I will (i) introduce ANYbotics and ANYmal; (ii) zoom into the electromechanical design of the ANYdrive actuator subsystem of ANYmal; (iii) elaborate on verification and reliability tests for robust legged robots; and (iv) present a selection of explosion-proof product design techniques used in the world's first explosion-proof autonomous legged robot. The talk will offer a peephole into a world-leading robotics company and demystify the life of an R&D engineer in it.

https://www.anybotics.com/

https://ch.linkedin.com/in/mehmutlu

Bio: Mehmet received his B.S. and M.S. degrees in Electrical and Electronics Engineering, together with a minor degree in Mechatronics, from Middle East Technical University, Turkey. He received a dual Ph.D. degree in Robotics, Control, and Intelligent Systems from the École Polytechnique Fédérale de Lausanne (EPFL, Switzerland) and Instituto Superior Técnico Lisboa (IST-Lisbon, Portugal). He is currently the electrical design lead of the ANYmal X project at ANYbotics AG, designing electromechanical hardware for the world's first explosion-proof quadrupedal industrial inspection robot.

