Title: Reinforcement Learning for Dynamical Systems with Temporal Logic Specifications
Abstract: Dynamical systems such as drones, mobile robots, and autonomous cars are envisioned to carry out complex missions whose specifications may include spatial (e.g., regions of interest), temporal (e.g., time bounds), and logical (e.g., priority, dependency, or concurrency among tasks) requirements. As specifications grow more complex, encoding them via algebraic equations becomes increasingly difficult. Alternatively, such specifications can be expressed compactly using temporal logics (TL). In this talk, I will address the problem of learning optimal control policies that satisfy TL specifications in the face of uncertainty. Standard reinforcement learning (RL) algorithms are not directly applicable when the objective is to satisfy a TL specification. To overcome this limitation, I will formulate an approximate problem that can be solved via RL and present a suboptimality bound for the proposed solution. Then, I will discuss the scalability of learning with TL objectives and present a more tractable RL formulation.
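Example: As an illustrative sketch in standard signal temporal logic (STL) notation, a hypothetical requirement such as "reach region A within 10 time units while avoiding region B during the first 20 time units" (not a specification taken from the talk) can be written compactly as

    F_{[0,10]} (x \in A) \wedge G_{[0,20]} \neg (x \in B),

where F_{[a,b]} ("eventually within [a,b]") and G_{[a,b]} ("always during [a,b]") are time-bounded temporal operators and x denotes the system state.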
Bio: Derya Aksaray is an Assistant Professor in the Department of Electrical and Computer Engineering at Northeastern University. She is also a core member of the Institute for Experiential Robotics. Previously, she was an Assistant Professor in the Department of Aerospace Engineering and Mechanics at the University of Minnesota from 2018 to 2022. She also held postdoctoral researcher positions at the Massachusetts Institute of Technology from 2016 to 2017 and at Boston University from 2014 to 2016. She received her Ph.D. degree in Aerospace Engineering from the Georgia Institute of Technology and her B.S. degree in Aerospace Engineering from Middle East Technical University. Her research interests lie primarily in the areas of control theory, formal methods, and machine learning, with applications to autonomous systems and aerial robotics.