Entailment Relation Aware Paraphrase Generation
Abhilasha Sancheti, Rachel Rudinger, Balaji Vasan Srinivasan
[AAAI-22] Main Track
Abstract:
We introduce a new task of entailment-relation-aware paraphrase generation which aims at generating a paraphrase conforming to a given entailment relation (e.g. equivalent, forward entailing, or reverse entailing) with respect to the given input. We propose a reinforcement learning based weakly-supervised paraphrasing system, ERAP, that can be trained using existing paraphrase and natural language inference (NLI) corpora without an explicit task-specific corpus.
A combination of automated and human evaluations shows that ERAP generates paraphrases that conform to the specified entailment relation and are of good quality compared to baseline and uncontrolled paraphrasing systems. Using ERAP to augment training data for downstream textual entailment tasks improves performance over an uncontrolled paraphrasing system and introduces fewer training artifacts, indicating the benefit of explicit control during paraphrasing.
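The sketch below illustrates the core check behind the task, not ERAP's actual reward or training procedure: an off-the-shelf NLI classifier (here, the publicly available roberta-large-mnli model, an illustrative choice) applied in both directions can label the entailment relation between an input and a candidate paraphrase as equivalent, forward entailing, or reverse entailing.

    # Minimal sketch (assumptions: model choice and relation-labeling rule are
    # illustrative, not the paper's exact reward).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL = "roberta-large-mnli"
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL)
    model.eval()

    def entails(premise: str, hypothesis: str) -> bool:
        # True if the NLI model predicts ENTAILMENT for premise -> hypothesis.
        inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        label = model.config.id2label[logits.argmax(dim=-1).item()]
        return label.upper() == "ENTAILMENT"

    def relation(source: str, paraphrase: str) -> str:
        # Bidirectional NLI checks determine the relation label.
        fwd = entails(source, paraphrase)   # source entails paraphrase
        rev = entails(paraphrase, source)   # paraphrase entails source
        if fwd and rev:
            return "equivalent"
        if fwd:
            return "forward entailing"
        if rev:
            return "reverse entailing"
        return "other"

    print(relation("A man is playing a guitar on stage.",
                   "A man is playing an instrument."))  # likely "forward entailing"

Such a bidirectional check could, for example, serve as a weak supervision or reward signal when no task-specific corpus with relation labels is available, which is the setting the abstract describes.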
Sessions where this paper appears
- Poster Session 6: Sat, February 26, 8:45 AM - 10:30 AM (+00:00), Red 5
- Poster Session 9: Sun, February 27, 8:45 AM - 10:30 AM (+00:00), Red 5