TY - GEN
T1 - Domain Adaptive Video Semantic Segmentation via Cross-Domain Moving Object Mixing
AU - Cho, Kyusik
AU - Lee, Suhyeon
AU - Seong, Hongje
AU - Kim, Euntai
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The network trained for domain adaptation is prone to bias toward the easy-to-transfer classes. Since the ground truth label on the target domain is unavailable during training, the bias problem leads to skewed predictions, forgetting to predict hard-to-transfer classes. To address this problem, we propose Cross-domain Moving Object Mixing (CMOM) that cuts several objects, including hard-to-transfer classes, in the source domain video clip and pastes them into the target domain video clip. Unlike image-level domain adaptation, the temporal context should be maintained to mix moving objects in two different videos. Therefore, we design CMOM to mix with consecutive video frames, so that unrealistic movements do not occur. We additionally propose Feature Alignment with Temporal Context (FATC) to enhance target domain feature discriminability. FATC exploits the robust source domain features, which are trained with ground truth labels, to learn discriminative target domain features in an unsupervised manner by filtering unreliable predictions with temporal consensus. We demonstrate the effectiveness of the proposed approaches through extensive experiments. In particular, our model reaches an mIoU of 53.81% on the VIPER → Cityscapes-Seq benchmark and an mIoU of 56.31% on the SYNTHIA-Seq → Cityscapes-Seq benchmark, surpassing the state-of-the-art methods by large margins.
AB - The network trained for domain adaptation is prone to bias toward the easy-to-transfer classes. Since the ground truth label on the target domain is unavailable during training, the bias problem leads to skewed predictions, forgetting to predict hard-to-transfer classes. To address this problem, we propose Cross-domain Moving Object Mixing (CMOM) that cuts several objects, including hard-to-transfer classes, in the source domain video clip and pastes them into the target domain video clip. Unlike image-level domain adaptation, the temporal context should be maintained to mix moving objects in two different videos. Therefore, we design CMOM to mix with consecutive video frames, so that unrealistic movements do not occur. We additionally propose Feature Alignment with Temporal Context (FATC) to enhance target domain feature discriminability. FATC exploits the robust source domain features, which are trained with ground truth labels, to learn discriminative target domain features in an unsupervised manner by filtering unreliable predictions with temporal consensus. We demonstrate the effectiveness of the proposed approaches through extensive experiments. In particular, our model reaches an mIoU of 53.81% on the VIPER → Cityscapes-Seq benchmark and an mIoU of 56.31% on the SYNTHIA-Seq → Cityscapes-Seq benchmark, surpassing the state-of-the-art methods by large margins.
KW - Algorithms: Video recognition and understanding (tracking, action recognition, etc.)
KW - Image recognition and understanding (object detection, categorization, segmentation, scene modeling, visual reasoning)
UR - http://www.scopus.com/inward/record.url?scp=85149002886&partnerID=8YFLogxK
U2 - 10.1109/WACV56688.2023.00056
DO - 10.1109/WACV56688.2023.00056
M3 - Conference contribution
AN - SCOPUS:85149002886
T3 - Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023
SP - 489
EP - 498
BT - Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023
Y2 - 3 January 2023 through 7 January 2023
ER -