TY - JOUR
T1 - Textural Detail Preservation Network for Video Frame Interpolation
AU - Yoon, Kihwan
AU - Huh, Jingang
AU - Kim, Yong Han
AU - Kim, Sungjei
AU - Jeong, Jinwoo
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2023
Y1 - 2023
N2 - The subjective image quality of a Video Frame Interpolation (VFI) result depends on whether image features such as edges, textures, and blobs are preserved. With the development of deep learning, various algorithms have been proposed, and the objective results of VFI have significantly improved. Moreover, perceptual loss has been used to enhance subjective quality by preserving image features, and as a result, subjective quality has improved. Despite these quality enhancements in VFI, no analysis has been performed on preserving specific features in the interpolated frames. Therefore, we conducted an analysis on preserving textural detail, such as film grain noise, which can represent the texture of an image, and weak textures, such as droplets or particles. Based on our analysis, we identify the importance of synthesis networks in textural detail preservation and propose an enhanced synthesis network, the Textural Detail Preservation Network (TDPNet). Furthermore, based on our analysis, we propose a Perceptual Training Method (PTM) that addresses the degradation of Peak Signal-to-Noise Ratio (PSNR) caused by simply applying perceptual loss and preserves more textural detail. We also propose a Multi-scale Resolution Training Method (MRTM) to address poor performance when the test dataset's resolution differs from that of the training dataset. In experiments, the proposed network outperformed state-of-the-art VFI algorithms in LPIPS and DISTS on the Vimeo90K, HD, SNU-FILM, and UVG datasets, and also achieved superior subjective results. Furthermore, applying PTM improved PSNR by an average of 0.293 dB compared to simply applying perceptual loss.
AB - The subjective image quality of a Video Frame Interpolation (VFI) result depends on whether image features such as edges, textures, and blobs are preserved. With the development of deep learning, various algorithms have been proposed, and the objective results of VFI have significantly improved. Moreover, perceptual loss has been used to enhance subjective quality by preserving image features, and as a result, subjective quality has improved. Despite these quality enhancements in VFI, no analysis has been performed on preserving specific features in the interpolated frames. Therefore, we conducted an analysis on preserving textural detail, such as film grain noise, which can represent the texture of an image, and weak textures, such as droplets or particles. Based on our analysis, we identify the importance of synthesis networks in textural detail preservation and propose an enhanced synthesis network, the Textural Detail Preservation Network (TDPNet). Furthermore, based on our analysis, we propose a Perceptual Training Method (PTM) that addresses the degradation of Peak Signal-to-Noise Ratio (PSNR) caused by simply applying perceptual loss and preserves more textural detail. We also propose a Multi-scale Resolution Training Method (MRTM) to address poor performance when the test dataset's resolution differs from that of the training dataset. In experiments, the proposed network outperformed state-of-the-art VFI algorithms in LPIPS and DISTS on the Vimeo90K, HD, SNU-FILM, and UVG datasets, and also achieved superior subjective results. Furthermore, applying PTM improved PSNR by an average of 0.293 dB compared to simply applying perceptual loss.
KW - Video frame interpolation
KW - perceptual loss
KW - synthesis network
KW - textural detail preservation
UR - http://www.scopus.com/inward/record.url?scp=85164802510&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3294964
DO - 10.1109/ACCESS.2023.3294964
M3 - Article
AN - SCOPUS:85164802510
SN - 2169-3536
VL - 11
SP - 71994
EP - 72006
JO - IEEE Access
JF - IEEE Access
ER -