Goal Recognition as Reinforcement Learning

Leonardo Amado, Reuth Mirsky, Felipe Meneguzzi

[AAAI-22] Main Track
Abstract: Most approaches for goal recognition rely on specifications of the possible dynamics of the actor in the environment when pursuing a goal. These specifications suffer from two key issues. First, encoding these dynamics requires careful design by a domain expert, which is often not robust to noise at recognition time. Second, existing approaches often need costly real-time computations to reason about the likelihood of each potential goal. In this paper, we develop a framework that combines model-free reinforcement learning and goal recognition to alleviate the need for careful, manual domain design and for costly online executions. This framework consists of two main stages: offline learning of policies or utility functions for each potential goal, and online inference. We provide a first instance of this framework using tabular Q-learning for the learning stage, together with three measures that can be used to perform the inference stage. The resulting instantiation achieves state-of-the-art performance against existing goal recognizers on standard evaluation domains, and superior performance in noisy environments.
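To make the two-stage framework concrete, the sketch below shows one way it could look in code: a tabular Q-learning loop run offline for each candidate goal, followed by an online inference step that ranks goals by the summed utility of the observed state-action pairs. The Corridor environment, the hyperparameters, and the summed-Q measure are illustrative assumptions made for this sketch; they are not the paper's evaluation domains or its three inference measures.

    import random
    from collections import defaultdict

    class Corridor:
        """Toy 1-D world (an assumption for this sketch): states 0..n-1,
        actions move left (-1) or right (+1) toward a goal cell."""
        actions = (-1, +1)

        def __init__(self, n=7):
            self.n = n

        def reset(self):
            return self.n // 2  # start in the middle

        def step(self, state, action, goal):
            nxt = min(max(state + action, 0), self.n - 1)
            done = nxt == goal
            reward = 1.0 if done else -0.01  # small step cost, goal bonus
            return nxt, reward, done

    def q_learn(env, goal, episodes=500, alpha=0.1, gamma=0.95, eps=0.2):
        """Offline stage: learn a tabular Q-function for one candidate goal."""
        Q = defaultdict(float)  # (state, action) -> utility estimate
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                # epsilon-greedy action selection
                a = (random.choice(env.actions) if random.random() < eps
                     else max(env.actions, key=lambda a: Q[(s, a)]))
                s2, r, done = env.step(s, a, goal)
                best_next = 0.0 if done else max(Q[(s2, b)] for b in env.actions)
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                s = s2
        return Q

    def recognize(q_tables, observations):
        """Online stage: rank goals by how well the observed state-action
        pairs fit each learned Q-function (an illustrative summed-utility
        measure, not necessarily one of the paper's three measures)."""
        scores = {g: sum(Q[(s, a)] for s, a in observations)
                  for g, Q in q_tables.items()}
        return max(scores, key=scores.get)

    env = Corridor()
    goals = [0, 6]
    q_tables = {g: q_learn(env, g) for g in goals}  # offline, once per goal
    obs = [(3, +1), (4, +1)]  # two observed moves to the right
    print(recognize(q_tables, obs))  # the rightmost goal, 6, scores highest

Note that all learning happens before recognition time: the online step is a handful of table lookups per goal, which is what lets the framework avoid the costly real-time computations the abstract describes.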


Sessions where this paper appears

  • Poster Session 3

    Fri, February 25, 8:45 AM - 10:30 AM (UTC)
    Blue 4

  • Poster Session 12

    Mon, February 28, 8:45 AM - 10:30 AM (UTC)
    Blue 4