Online Apprenticeship Learning

Lior Shani, Tom Zahavy, Shie Mannor

[AAAI-22] Main Track
Abstract: In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function. Instead, we observe trajectories sampled by an expert that acts according to some policy. The goal is to find a policy that matches the expert's performance on some predefined set of cost functions.

We introduce an online variant of AL (Online Apprenticeship Learning; OAL), where the agent is expected to perform comparably to the expert while interacting with the environment. We show that the OAL problem can be effectively solved by combining two mirror-descent-based no-regret algorithms: one for policy optimization and another for learning the worst-case cost. By employing optimistic exploration, we derive a convergent algorithm with $O(\sqrt{K})$ regret, where $K$ is the number of interactions with the MDP, plus an additional linear error term that depends on the number of expert trajectories available. Importantly, our algorithm avoids the need to solve an MDP at each iteration, making it more practical than prior AL methods. Finally, we implement a deep variant of our algorithm that shares some similarities with GAIL (Ho and Ermon 2016), but in which the discriminator is replaced with the costs learned by the OAL problem. Our simulations suggest that OAL performs well in high-dimensional control problems.
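To illustrate how the two no-regret updates interact, below is a minimal tabular sketch in Python/NumPy of alternating mirror-descent steps: a softmax (exponentiated) policy update against the current cost estimate, and an exponentiated-gradient cost update that pushes the cost toward directions where the agent underperforms the expert. All names and sizes (S, A, H, phi, expert_mu, the step sizes) are hypothetical, the transition kernel is assumed known, and the paper's optimistic exploration bonuses are omitted; this is an illustrative sketch, not the authors' exact algorithm.

# Minimal, illustrative sketch of alternating mirror-descent updates for OAL-style
# apprenticeship learning: policy mirror descent vs. worst-case linear cost.
import numpy as np

rng = np.random.default_rng(0)
S, A, H, d = 5, 3, 10, 4                     # states, actions, horizon, feature dim (hypothetical)
P = rng.dirichlet(np.ones(S), size=(S, A))   # known transition kernel: P[s, a] is a next-state distribution
phi = rng.random((S, A, d))                  # cost features, c_w(s, a) = w @ phi[s, a], with w on the simplex
expert_mu = rng.random(d)                    # expert's (empirical) feature expectations

def q_values(w, pi):
    """Finite-horizon Q-values of policy pi under cost c_w (backward dynamic programming)."""
    c = phi @ w                                          # (S, A) cost matrix
    Q = np.zeros((H, S, A))
    V_next = np.zeros(S)
    for h in reversed(range(H)):
        Q[h] = c + P @ V_next                            # Bellman backup
        V_next = (pi[h] * Q[h]).sum(axis=1)
    return Q

def feature_expectations(pi):
    """Expected feature counts of pi over the horizon, starting from a uniform state distribution."""
    mu, dist = np.zeros(d), np.full(S, 1.0 / S)
    for h in range(H):
        sa = dist[:, None] * pi[h]                       # state-action occupancy at step h
        mu += (sa[..., None] * phi).sum(axis=(0, 1))
        dist = np.einsum("sa,sat->t", sa, P)
    return mu

pi = np.full((H, S, A), 1.0 / A)                         # uniform initial policy
w = np.full(d, 1.0 / d)                                  # cost weights on the simplex
eta_pi, eta_w = 0.5, 0.5

for k in range(200):
    Q = q_values(w, pi)
    # Policy mirror descent: exponentiated (softmax) update against the current cost.
    pi = pi * np.exp(-eta_pi * Q)
    pi /= pi.sum(axis=2, keepdims=True)
    # Cost mirror ascent: shift weight toward costs on which the agent trails the expert.
    grad = feature_expectations(pi) - expert_mu
    w = w * np.exp(eta_w * grad)
    w /= w.sum()

print("final worst-case cost gap:", w @ (feature_expectations(pi) - expert_mu))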

Sessions where this paper appears

  • Poster Session 3

    Fri, February 25 8:45 AM - 10:30 AM (+00:00)
    Blue 1

  • Poster Session 7

    Sat, February 26 4:45 PM - 6:30 PM (+00:00)
    Blue 1