Visually Guided Sound Source Separation and Localization using Self-Supervised Motion Representations
Zhu, Lingyu; Rahtu, Esa (2022-02-15)
IEEE
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tuni-202210267862
Description
Peer reviewed
Abstract
In this paper, we perform audio-visual sound source separation, i.e., separating component audios from a mixture based on videos of the sound sources. Moreover, we aim to pinpoint the source location in the input video sequence. Recent works have shown impressive audio-visual separation results when using prior knowledge of the source type (e.g. a human playing an instrument) and pre-trained motion detectors (e.g. keypoints or optical flows), but at the same time these models are limited to a certain application domain. In this paper, we address these limitations and make the following contributions: i) we propose a two-stage architecture, called the Appearance and Motion network (AM-net), where the stages specialise in appearance and motion cues, respectively, and the entire system is trained in a self-supervised manner; ii) we introduce an Audio-Motion Embedding (AME) framework to explicitly represent the motions that relate to sound; iii) we propose an audio-motion transformer architecture for audio and motion feature fusion; iv) we demonstrate state-of-the-art performance on two challenging datasets (MUSIC-21 and AVE) despite not using any pre-trained keypoint detectors or optical flow estimators. Project page: https://lyzhu.github.io/self-supervised-motion-representations
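The abstract's third contribution is a transformer-style fusion of audio and motion features. The paper's exact architecture is not given here, so the following is only a minimal NumPy sketch of the generic mechanism such fusion typically builds on: scaled dot-product cross-attention, with audio features as queries attending over motion features. All names, shapes, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cross_attention(audio, motion):
    """Hypothetical sketch of audio-motion fusion via cross-attention.

    audio  : (Ta, d) array - audio time steps act as queries.
    motion : (Tm, d) array - motion time steps act as keys and values.
    Returns a (Ta, d) array of motion-informed audio features.
    """
    d = audio.shape[-1]
    # Similarity between each audio step and each motion step, scaled by sqrt(d).
    scores = audio @ motion.T / np.sqrt(d)            # (Ta, Tm)
    # Softmax over motion steps (numerically stabilised).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each audio step becomes a convex combination of motion features.
    return weights @ motion                           # (Ta, d)

rng = np.random.default_rng(0)
audio = rng.standard_normal((8, 64))    # 8 audio time steps, 64-dim (assumed)
motion = rng.standard_normal((12, 64))  # 12 motion time steps, 64-dim (assumed)
fused = cross_attention(audio, motion)
print(fused.shape)  # (8, 64)
```

In a real transformer block this would be wrapped with learned query/key/value projections, multiple heads, residual connections, and layer normalisation; the sketch keeps only the attention core to show how motion cues can modulate per-time-step audio features.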
Kokoelmat
- TUNICRIS-julkaisut [17007]