Title: Scalable Behavior
Abstract: Motion forecasting for autonomous driving is a challenging task because complex driving scenarios result in a heterogeneous mix of static and dynamic inputs. How best to represent and fuse information about road geometry, lane connectivity, time-varying traffic light state, and the history of a dynamic set of agents and their interactions into an effective encoding remains an open problem. To model this diverse set of input features, many approaches propose equally complex systems built from a diverse set of modality-specific modules. The resulting systems are difficult to scale, extend, or tune in rigorous ways to trade off quality and efficiency.
First, we present Wayformer, a family of attention-based architectures for motion forecasting that are simple and homogeneous. Wayformer offers a compact model description consisting of an attention-based scene encoder and a decoder. Second, we propose a language modeling approach, MotionLM, that predicts joint distributions over interactive agent futures in a single autoregressive decoding process. The model's sequential factorization enables temporally causal conditional rollouts. The proposed approaches establish new state-of-the-art performance for single- and multi-agent motion prediction on the Waymo Open Motion Dataset.
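To illustrate the kind of homogeneous, attention-based fusion the abstract describes, below is a minimal sketch (not the authors' code) of an early-fusion scene encoder: each modality (road graph, traffic lights, agent history) is projected into a shared embedding space, concatenated into one token set, and processed by a single self-attention stack. All module names, feature dimensions, and token counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    """Hypothetical early-fusion scene encoder in the spirit of Wayformer."""

    def __init__(self, d_model=256, num_layers=4, num_heads=8):
        super().__init__()
        # Per-modality projections into a shared hidden size (feature dims are made up).
        self.roadgraph_proj = nn.Linear(7, d_model)      # e.g. polyline segment features
        self.traffic_light_proj = nn.Linear(4, d_model)  # e.g. light state + position
        self.agent_hist_proj = nn.Linear(9, d_model)     # e.g. pose + velocity per timestep
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, roadgraph, traffic_lights, agent_history):
        # Each input is [batch, num_tokens, feature_dim]; after projection all
        # modalities share the same width and can be fused by concatenating
        # along the token axis, then encoded with one self-attention stack.
        tokens = torch.cat([
            self.roadgraph_proj(roadgraph),
            self.traffic_light_proj(traffic_lights),
            self.agent_hist_proj(agent_history),
        ], dim=1)
        return self.encoder(tokens)  # [batch, total_tokens, d_model]

# Example usage with dummy shapes.
enc = SceneEncoder()
scene = enc(torch.randn(2, 128, 7), torch.randn(2, 8, 4), torch.randn(2, 64, 9))
print(scene.shape)  # torch.Size([2, 200, 256])
```

The point of the sketch is the design choice rather than the specific layers: because every modality is mapped to the same token format, the model stays a single homogeneous attention stack whose width and depth can be scaled directly to trade off quality and efficiency.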
Bio: Rami Al-Rfou is a Staff Research Scientist at Waymo Research, where he leads a team building foundation models for motion and driving, drawing on his expertise in large language models. Previously, Rami was a technical lead for assisted-writing applications such as SmartReply at Google Research. His research focused on improving the pretraining of large language models through token-free architectures, synthetic datasets constructed with knowledge-base-grounded generative models, and improved sampling strategies for multilingual datasets. These pretrained language models, trained on more than 100 languages, are used for query understanding, web page understanding, semantic search, and response ranking in conversations.