MINIMAL: Mining Models for Universal Adversarial Triggers
Yaman Kumar Singla, Swapnil Parekh, Somesh Singh, Balaji Krishnamurthy, Rajiv Ratn Shah, Changyou Chen
[AAAI-22] Main Track
Abstract:
It is well known that natural language models are vulnerable to adversarial attacks, which are mostly input-specific in nature. Recently, it has been shown that input-agnostic attacks, called universal adversarial triggers, also exist in NLP models. However, existing methods to craft universal triggers are data-intensive: they require large numbers of data samples to generate adversarial triggers, which are typically inaccessible to attackers. For instance, previous works use 3,000 data samples per class for the SNLI dataset to generate adversarial triggers. In this paper, we present a novel data-free approach, MINIMAL, to mine input-agnostic adversarial triggers from models. Using the triggers produced by our data-free algorithm, we reduce the accuracy of the Stanford Sentiment Treebank's positive class from 93.6% to 9.6%. Similarly, for the Stanford Natural Language Inference (SNLI) dataset, our single-word trigger reduces the accuracy of the entailment class from 90.95% to less than 0.6%. Despite being completely data-free, we achieve accuracy drops equivalent to those of data-dependent methods.
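For intuition, below is a minimal, hypothetical sketch of how a universal (input-agnostic) trigger attack is evaluated: a fixed token sequence is prepended to every input, and the drop in accuracy on one target class is measured. The model, trigger tokens, and example sentences are illustrative assumptions, not the paper's actual artifacts; MINIMAL's contribution is mining such triggers without any data samples, which this sketch does not show.

    # Hypothetical sketch (not the paper's code): evaluate a universal
    # adversarial trigger by prepending a fixed token sequence to every
    # input and measuring the accuracy drop on one target class.
    from transformers import pipeline

    # Default SST-2 sentiment classifier (assumption; any sentiment model works).
    classifier = pipeline("sentiment-analysis")

    # Placeholder trigger tokens, not a trigger mined by MINIMAL.
    trigger = "zoning tapping fiennes"

    positive_examples = [
        "a gorgeous, witty, and altogether satisfying film.",
        "one of the best movies of the year.",
    ]

    def accuracy(texts, target_label="POSITIVE"):
        """Fraction of inputs the model assigns to the target class."""
        preds = classifier(texts)
        return sum(p["label"] == target_label for p in preds) / len(texts)

    # The same trigger prefix is reused for every input: the attack is
    # input-agnostic, unlike conventional per-example adversarial attacks.
    print("clean accuracy:    ", accuracy(positive_examples))
    print("triggered accuracy:", accuracy([f"{trigger} {t}" for t in positive_examples]))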
Sessions where this paper appears:
- Poster Session 1: Thu, February 24, 4:45 PM - 6:30 PM (+00:00), Red 4
- Poster Session 8: Sun, February 27, 12:45 AM - 2:30 AM (+00:00), Red 4