Dual Task Framework for Improving Persona-Grounded Dialogue Dataset

Minju Kim, Beong-Woo Kwak, Youngwook Kim, Hong-In Lee, Seung-Won Hwang, Jinyoung Yeo

[AAAI-22] Main Track
Abstract: This paper introduces a simple yet effective data-centric approach to improving persona-conditioned dialogue agents. Prior model-centric approaches depend unquestioningly on raw crowdsourced benchmark datasets such as Persona-Chat. In contrast, we aim to fix annotation artifacts in the benchmark itself, a fix that is orthogonally applicable to any dialogue model. Specifically, we augment relevant personas to improve both the dialogue dataset and the agent, leveraging the primal-dual structure of the two tasks of predicting dialogue responses and personas from each other. Experiments on Persona-Chat show that our approach outperforms pre-trained LMs by 11.7 points in accuracy.
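The abstract's dual-task idea (predicting responses from personas and personas from responses, and keeping only persona candidates on which both directions agree) can be illustrated with a minimal sketch. This is not the authors' implementation: the paper uses trained language models as the primal and dual scorers, while here a hypothetical word-overlap scorer stands in, and `augment_personas` and its `threshold` parameter are invented for illustration.

```python
def overlap_score(a: str, b: str) -> float:
    """Toy scorer: Jaccard word overlap (a stand-in for a trained LM scorer)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def augment_personas(response: str, candidate_personas: list[str],
                     threshold: float = 0.2) -> list[str]:
    """Keep persona candidates that both the primal direction
    (persona -> response) and the dual direction (response -> persona)
    score above a threshold, mimicking the agreement-based filtering."""
    kept = []
    for persona in candidate_personas:
        forward = overlap_score(persona, response)   # persona -> response
        backward = overlap_score(response, persona)  # response -> persona
        if min(forward, backward) >= threshold:
            kept.append(persona)
    return kept

response = "I just got back from walking my two dogs in the park"
candidates = ["I have two dogs", "I work as an accountant"]
print(augment_personas(response, candidates))  # → ['I have two dogs']
```

With real models the two directions would give different scores, so requiring both to pass filters out personas that merely share surface words with a response but cannot actually be recovered from it.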

Sessions where this paper appears

  • Poster Session 3

    Fri, February 25 8:45 AM - 10:30 AM (+00:00)
    Red 5

  • Poster Session 8

    Sun, February 27 12:45 AM - 2:30 AM (+00:00)
    Red 5

  • Oral Session 8

    Sun, February 27 2:30 AM - 3:45 AM (+00:00)
    Red 5