Interpretable Privacy Preservation of Text Representations Using Vector Steganography
Geetanjali Bihani
[AAAI-22] Doctoral Consortium
Abstract:
Contextual word representations generated by language models encode spurious associations present in the training corpora. Adversaries can exploit these associations to reverse-engineer the private attributes of entities mentioned in the training corpora. These findings have led to efforts towards minimizing the privacy risks of language models. However, existing approaches lack interpretability, compromise data utility, and fail to provide privacy guarantees. Thus, the goal of my doctoral research is to develop interpretable approaches towards privacy preservation of text representations that maximize data utility retention and guarantee privacy. To this end, I aim to study and develop methods that incorporate steganographic modifications within the vector geometry to obfuscate underlying spurious associations while retaining the distributional semantic properties learnt during training.
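To make the idea of modifying vector geometry to obfuscate an attribute more concrete, here is a minimal illustrative sketch. It is not the author's proposed steganographic method; it uses a simpler, well-known stand-in (projecting embeddings onto the null space of a hypothetical private-attribute direction `v`) to show how a single direction carrying a spurious association can be removed while the rest of the geometry is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (for illustration only): 1000 word vectors of
# dimension 50, and a unit direction v along which a private attribute
# is assumed to leak.
X = rng.normal(size=(1000, 50))
v = rng.normal(size=50)
v /= np.linalg.norm(v)

# Obfuscation step: remove each vector's component along v, so a linear
# probe reading off direction v can no longer recover the attribute.
X_obf = X - np.outer(X @ v, v)

# Every obfuscated vector is now orthogonal to v (dot products ~ 0),
# while components off that one direction are untouched, so most
# distributional structure is retained.
print(float(np.abs(X_obf @ v).max()))
```

A full steganographic scheme would go further than deletion, e.g. hiding or rewriting the association so the modification itself is hard to detect, but the projection above shows the basic kind of geometric intervention involved.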
Sessions where this paper appears:
- Poster Session 3: Fri, February 25, 8:45 AM - 10:30 AM (+00:00), Red 5
- Poster Session 1: Thu, February 24, 4:45 PM - 6:30 PM (+00:00), Red 5