Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search

Yunhe Feng, Chirag Shah

[AAAI-22] AI for Social Impact Track
Abstract: Gender bias is one of the most common and well-studied demographic biases in information retrieval, and in AI systems in general. After researchers discovered and reported that gender bias in results for certain professions could change searchers' worldviews, mainstream image search engines, such as Google, quickly took action to correct such bias. However, given the opaque nature of these systems, it is unclear whether they have addressed unequal gender representation and gender stereotypes in image search results systematically and sustainably. In this paper, we propose adversarial attack queries composed of professions and countries (e.g., `CEO United States') to investigate whether gender bias has been thoroughly mitigated by image search engines. Our experiments on Google, Baidu, Naver, and Yandex Image Search show that the proposed attack can trigger high levels of gender bias in image search results very effectively. To defend against such attacks and mitigate gender bias, we design and implement three novel re-ranking algorithms -- an epsilon-greedy algorithm, a relevance-aware swapping algorithm, and a fairness-greedy algorithm -- to re-rank returned images for given image queries. Experiments on both simulated (three typical gender distributions) and real-world (18,904 images) datasets demonstrate that the proposed algorithms mitigate gender bias effectively.
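To make the epsilon-greedy re-ranking idea concrete, below is a minimal Python sketch of one way such a re-ranker could work. It is an illustration only, not the paper's implementation: the `Image` type, the `gender` label (assumed to come from some upstream classifier), and the `rerank_epsilon_greedy` function are hypothetical names, and the paper's actual algorithm may differ in detail.

```python
import random
from dataclasses import dataclass

@dataclass
class Image:
    url: str
    gender: str  # "male" or "female"; assumed to come from an upstream classifier

def rerank_epsilon_greedy(ranked, epsilon=0.3, seed=None):
    """With probability epsilon, promote the best remaining image of the
    currently under-represented gender; otherwise keep relevance order."""
    rng = random.Random(seed)
    remaining = list(ranked)           # images in original relevance order
    reranked = []
    counts = {"male": 0, "female": 0}  # genders shown so far
    while remaining:
        if rng.random() < epsilon:
            # Explore: pick the top-ranked image of the minority gender,
            # falling back to the next relevant image if none is left.
            minority = min(counts, key=counts.get)
            pick = next((img for img in remaining if img.gender == minority),
                        remaining[0])
        else:
            # Exploit: keep the next image by original relevance.
            pick = remaining[0]
        remaining.remove(pick)
        counts[pick.gender] += 1
        reranked.append(pick)
    return reranked

# Example: a heavily male-skewed top-10 gets nudged toward parity.
results = [Image(f"img{i}.jpg", "male") for i in range(8)] + \
          [Image(f"img{i}.jpg", "female") for i in range(8, 10)]
print([img.gender for img in rerank_epsilon_greedy(results, epsilon=0.5, seed=0)])
```

The trade-off in this sketch: with probability epsilon the re-ranker sacrifices a little relevance to surface the under-represented gender earlier, and otherwise preserves the search engine's original ordering.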

Sessions where this paper appears

  • Poster Session 3

    Fri, February 25 8:45 AM - 10:30 AM (+00:00)
    Red 6

  • Poster Session 10

    Sun, February 27 4:45 PM - 6:30 PM (+00:00)
    Red 6

  • Oral Session 3

    Fri, February 25 10:30 AM - 11:45 AM (+00:00)
    Red 6