Reference-Based Speech Enhancement via Feature Alignment and Fusion Network

Huanjing Yue, Wenxin Duo, Jingyu Yang

[AAAI-22] Main Track
Abstract: Speech enhancement aims to recover clean speech from a noisy input and can be divided into single speech enhancement and personalized speech enhancement. Personalized speech enhancement usually utilizes a speaker identity, extracted from the noisy speech itself (or from a clean reference speech), as a global embedding to guide the enhancement process. In contrast, we observe that speech signals from the same speaker are correlated at the frame level in their short-time Fourier transform (STFT) spectrograms. We therefore propose reference-based speech enhancement via a feature alignment and fusion network (FAF-Net). Given a noisy speech and a clean reference speech spoken by the same speaker, we first propose a feature-level alignment strategy that warps the clean reference to the noisy speech at the frame level. We then fuse the reference feature with the noisy feature via a similarity-based fusion strategy. Finally, the fused features are passed to the decoder through skip connections, and the decoder generates the enhanced results. Experimental results demonstrate that the performance of the proposed FAF-Net is close to that of state-of-the-art speech enhancement methods on both the DNS and Voice Bank+DEMAND datasets. Our code is available at https://github.com/HieDean/FAF-Net.
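
As a rough illustration of the alignment-and-fusion idea described in the abstract, the sketch below warps frame-level reference features onto the noisy time axis with cosine-similarity soft attention and then fuses them with a similarity gate. The tensor shapes, the softmax temperature, and the sigmoid gating are illustrative assumptions of ours, not the FAF-Net architecture; see the linked repository for the authors' implementation.

```python
# Minimal sketch (not the authors' code): frame-level reference alignment
# and similarity-based fusion on encoder features, e.g. derived from STFT
# spectrograms. Shapes and the gating scheme are illustrative assumptions.
import torch
import torch.nn.functional as F

def align_reference(noisy_feat: torch.Tensor, ref_feat: torch.Tensor) -> torch.Tensor:
    """Warp reference frames onto the noisy time axis via soft attention.

    noisy_feat: (B, T_n, C) features of the noisy speech
    ref_feat:   (B, T_r, C) features of the clean reference
    returns:    (B, T_n, C) reference features, one aligned frame per noisy frame
    """
    q = F.normalize(noisy_feat, dim=-1)        # unit-norm queries (cosine similarity)
    k = F.normalize(ref_feat, dim=-1)          # unit-norm keys from the reference
    sim = torch.bmm(q, k.transpose(1, 2))      # (B, T_n, T_r) frame-to-frame similarity
    attn = torch.softmax(sim / 0.1, dim=-1)    # sharp soft alignment weights
    return torch.bmm(attn, ref_feat)           # similarity-weighted warp of the reference

def fuse(noisy_feat: torch.Tensor, aligned_ref: torch.Tensor) -> torch.Tensor:
    """Similarity-gated fusion: rely on the reference where it matches the input."""
    gate = torch.sigmoid(
        F.cosine_similarity(noisy_feat, aligned_ref, dim=-1)
    ).unsqueeze(-1)                            # (B, T_n, 1) per-frame confidence
    return noisy_feat + gate * aligned_ref     # fused features for the decoder skips

# Toy usage with random tensors standing in for encoder outputs.
noisy = torch.randn(2, 100, 64)  # 100 noisy frames, 64-dim features
ref = torch.randn(2, 80, 64)     # 80 clean reference frames
fused = fuse(noisy, align_reference(noisy, ref))
print(fused.shape)               # torch.Size([2, 100, 64])
```

In a full model along the lines the abstract describes, such fused features would feed the decoder through skip connections.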

Sessions where this paper appears

  • Poster Session 1

    Thu, February 24, 4:45 PM - 6:30 PM (+00:00)
    Red 4

  • Poster Session 8

    Sun, February 27, 12:45 AM - 2:30 AM (+00:00)
    Red 4