Optimistic Initialization for Exploration in Continuous Control
Samuel Lobel, Omer Gottesman, Cam Allen, Akhil Bagaria, George Konidaris
[AAAI-22] Main Track
Abstract:
Optimistic initialization underpins many theoretically sound exploration schemes in tabular domains; however, in the deep function approximation setting, optimism can quickly disappear if initialized naively. We propose a framework for more effectively incorporating optimistic initialization into reinforcement learning for continuous control. Our approach uses metric information about the state-action space to estimate which transitions are still unexplored, and explicitly maintains the initial Q-value optimism for the corresponding state-action pairs. We also develop methods for efficiently approximating these training objectives, and for incorporating domain knowledge into the optimistic envelope to improve sample efficiency. We empirically evaluate these approaches on a variety of hard exploration problems in continuous control, where our method outperforms existing exploration techniques.
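To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of optimistic initialization with a metric-based visitation estimate: a learned Q-value is blended with an optimistic prior, and the weight on the prior decays as a state-action pair gets closer to previously visited data. All names, the kernel form, and the `q_max` bound are illustrative assumptions.

```python
import torch
import torch.nn as nn

class OptimisticQ(nn.Module):
    """Sketch: keep Q-values near an optimistic bound for unexplored state-actions."""

    def __init__(self, state_dim, action_dim, q_max=100.0, bandwidth=1.0):
        super().__init__()
        self.q_net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
        self.q_max = q_max          # assumed optimistic upper bound on return
        self.bandwidth = bandwidth  # kernel bandwidth for the visitation estimate
        self.visited = []           # visited (state, action) pairs, as flat tensors

    def novelty(self, sa):
        # Returns ~1 for state-actions far from all visited pairs, ~0 near visited data.
        if not self.visited:
            return torch.ones(sa.shape[0], 1)
        ref = torch.stack(self.visited)                                  # (N, d)
        d2 = torch.cdist(sa, ref).min(dim=1, keepdim=True).values ** 2   # squared min distance
        return 1.0 - torch.exp(-d2 / (2 * self.bandwidth ** 2))

    def forward(self, state, action):
        sa = torch.cat([state, action], dim=-1)
        w = self.novelty(sa)
        # Unexplored pairs stay near q_max; well-explored pairs use the learned estimate.
        return w * self.q_max + (1.0 - w) * self.q_net(sa)
```

In this toy version, exploration arises because actions leading to unvisited regions retain Q-values near `q_max`, so a greedy policy is drawn toward them until nearby data accumulates and the learned estimate takes over.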
Sessions where this paper appears:
- Poster Session 6 (Blue 1)
- Poster Session 12 (Blue 1)