Self-Supervised Spatiotemporal Representation Learning by Exploiting Video Continuity
Hanwen Liang, Niamul Quader, Zhixiang Chi, Lizhe Chen, Peng Dai, Juwei Lu, Yang Wang
[AAAI-22] Main Track
Abstract:
Recent self-supervised video representation learning methods have found significant success by exploiting essential properties of videos, e.g., playback speed and temporal order.
This work exploits an essential yet under-explored property of videos, \textit{video continuity}, to obtain supervision signals for self-supervised representation learning.
Specifically, we formulate three novel continuity-related pretext tasks, namely continuity justification, discontinuity localization, and missing section approximation, which jointly supervise a shared backbone for video representation learning.
This self-supervision approach, termed Continuity Perception Network (CPNet), encourages the backbone to learn both local and long-range motion and context representations, and it outperforms prior art on multiple downstream tasks, such as action recognition, video retrieval, and action localization.
Moreover, video continuity is complementary to other video properties used for representation learning, and integrating the proposed pretext tasks into prior methods yields substantial performance gains.
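The continuity pretext tasks described above derive their labels directly from the video itself: a clip is either kept intact or has an interior section removed, and the model must judge whether the clip is continuous and, if not, where the jump occurs. As a minimal sketch (a hypothetical illustration, not the authors' code), the pseudo-label generation over frame indices might look like this, where `clip_len`, `gap_len`, and the sampling scheme are all assumptions for exposition:

```python
import random

def make_continuity_sample(num_frames=32, clip_len=16, gap_len=4, continuous=None):
    """Toy generator of continuity pretext labels from frame indices.

    Hypothetical sketch: sample a clip of consecutive frame indices; with
    probability 0.5, remove a `gap_len`-frame interior section to create a
    discontinuity. Returns (frame_indices, is_continuous, jump_position),
    where jump_position is None for continuous clips. The actual CPNet
    sampling details are assumptions here.
    """
    if continuous is None:
        continuous = random.random() < 0.5
    # Latest start so that even a discontinuous clip fits in the video.
    start = random.randint(0, num_frames - clip_len - gap_len)
    if continuous:
        return list(range(start, start + clip_len)), 1, None
    # Discontinuous: drop gap_len frames at a random interior position `cut`.
    cut = random.randint(1, clip_len - 1)
    frames = (list(range(start, start + cut))
              + list(range(start + cut + gap_len, start + clip_len + gap_len)))
    return frames, 0, cut
```

The continuity-justification head would then be trained on the binary label, and the discontinuity-localization head on the jump position; the dropped frames themselves could serve as the regression target for missing section approximation.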
Sessions where this paper appears:
- Poster Session 1: Thu, February 24, 4:45 PM - 6:30 PM (+00:00), Red 3
- Poster Session 11: Mon, February 28, 12:45 AM - 2:30 AM (+00:00), Red 3