Research Article

Hybrid Approach for Video Interpolation

by R. Geetharamani, Eashwar P., Srikarthikeyan M.K., Varoon S.B.
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 184 - Number 22
Year of Publication: 2022
Authors: R. Geetharamani, Eashwar P., Srikarthikeyan M.K., Varoon S.B.
10.5120/ijca2022922259

R. Geetharamani, Eashwar P., Srikarthikeyan M.K., Varoon S.B. Hybrid Approach for Video Interpolation. International Journal of Computer Applications. 184, 22 (Jul 2022), 16-22. DOI=10.5120/ijca2022922259

@article{ 10.5120/ijca2022922259,
author = { R. Geetharamani, Eashwar P., Srikarthikeyan M.K., Varoon S.B. },
title = { Hybrid Approach for Video Interpolation },
journal = { International Journal of Computer Applications },
issue_date = { Jul 2022 },
volume = { 184 },
number = { 22 },
month = { Jul },
year = { 2022 },
issn = { 0975-8887 },
pages = { 16-22 },
numpages = { 7 },
url = { https://ijcaonline.org/archives/volume184/number22/32448-2022922259/ },
doi = { 10.5120/ijca2022922259 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A R. Geetharamani
%A Eashwar P.
%A Srikarthikeyan M.K.
%A Varoon S.B.
%T Hybrid Approach for Video Interpolation
%J International Journal of Computer Applications
%@ 0975-8887
%V 184
%N 22
%P 16-22
%D 2022
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Video interpolation is a form of video processing in which intermediate frames are generated between existing ones by means of interpolation. This research aims to synthesize several frames between two adjacent frames of the original video. Frame interpolation can be performed in either static or dynamic mode; the dynamic approach identifies the number of frames to be interpolated between the reference frames. The reference frames are passed to three deep learning networks, namely Synthesis, Warping, and Refinement, each of which performs a different function to tackle blurring, occlusion, and arbitrary non-linear motion. Synthesis is a kernel-based approach in which interpolation is modeled as local convolution over the reference frames, using a single UNet. Warping uses optical flow information to back-warp the reference frames and generates the interpolated frame using two UNets. Refinement uses the optical flow and the warped frames to compute weighted maps that enhance the interpolation, using two UNets and a GAN. The raw interpolated output of the deep learning networks yielded a PSNR of 38.87 dB and an SSIM of 97.22%. To further enhance the results, this research combined these deep learning approaches and applied post-processing: the raw interpolated frames are color corrected and fused to form a new frame. Color correction masks the difference between the interpolated frame and the ground truth over the interpolated frame. Fusion ensures that the maximum pixel value from each input frame is present in the fused frame. Voting is then applied to the color-corrected frames from the three networks and the fused frame; it follows a per-pixel strategy and selects the best pixel from each of the interpolated frames. The datasets used to train and test these modules are DAVIS (Densely Annotated VIdeo Segmentation), Adobe 240, and Vimeo. This hybrid interpolation technique achieved a highest PSNR of 58.98 dB and an SSIM of 99.96%, improving on the base paper's PSNR of 32.49 dB and SSIM of 92.7%.
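The post-processing pipeline summarized in the abstract (color correction, max-pixel fusion, per-pixel voting, and PSNR evaluation) can be sketched in a few lines of NumPy. The snippet below is a minimal interpretation under stated assumptions, not the authors' implementation: the difference threshold, the use of the fused frame as the voting reference, and all function names are introduced here purely for illustration.

import numpy as np

# Sketch of the post-processing stage described in the abstract.
# Frames are assumed to be H x W x 3 uint8 arrays. The threshold value,
# the choice of voting reference, and the function names are assumptions.

def color_correct(interp, reference, thresh=10):
    """Overlay a difference mask on the interpolated frame: pixels that
    deviate from the reference frame (the ground truth, per the abstract)
    by more than `thresh` are replaced (assumed realization of masking)."""
    diff = np.abs(reference.astype(np.int16) - interp.astype(np.int16))
    mask = np.any(diff > thresh, axis=-1, keepdims=True)
    return np.where(mask, reference, interp)

def fuse(frames):
    """Fusion keeps the maximum pixel value from each input frame."""
    return np.max(np.stack(frames, axis=0), axis=0)

def vote(candidates, reference):
    """Per-pixel voting: for every pixel, keep the candidate value closest
    to the reference (assumption: 'best pixel' = smallest absolute error)."""
    stack = np.stack(candidates, axis=0).astype(np.int16)
    err = np.abs(stack - reference.astype(np.int16)).sum(axis=-1)  # (N, H, W)
    best = np.argmin(err, axis=0)                                  # winning frame index per pixel
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)].astype(np.uint8)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, as used to evaluate the frames."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

# Example flow: color correct each network output, fuse them, then vote.
# fused = fuse([cc_synth, cc_warp, cc_refine])
# final = vote([cc_synth, cc_warp, cc_refine, fused], reference=fused)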

References
  1. Minho Park, Hak Gu Kim, Sangmin Lee and Yong Man Ro. Robust Video Frame Interpolation with Exceptional Motion Map. IEEE Transactions on Circuits and Systems for Video Technology, 31(2):754–764, 2021.
  2. Jinbo Xing, Wenbo Hu, Yuechen Zhang and Tien-Tsin Wong. Flow-aware synthesis: A generic motion model for video frame interpolation. Computational Visual Media, 7(3):393–405, 2021.
  3. Bo Yan, Weimin Tan, Chuming Lin and Liquan Shen. Fine-Grained Motion Estimation for Video Frame Interpolation. IEEE Transactions on Broadcasting, 67(1):174–184, 2021.
  4. Yong-Hoon Kwon, Ju Hong Yoon and Min-Gyu Park. Direct Video Frame Interpolation with Multiple Latent Encoders. IEEE Access, 9:32457–32466, 2021.
  5. Minho Park, Sangmin Lee and Yong Man Ro. Video Frame Interpolation via Exceptional Motion-Aware Synthesis. International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1958–1962, 2020.
  6. Avinash Paliwal and Nima Khademi Kalantari. Deep Slow Motion Video Reconstruction with Hybrid Imaging System. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(7):1557–1569, 2020.
  7. Joi Shimizu, Zhengxue Cheng, Heming Sun, Masaru Takeuchi and Jiro Katto. HEVC Video Coding with Deep Learning Based Frame Interpolation. IEEE 9th Global Conference on Consumer Electronics (GCCE), pages 433–434, 2020.
  8. Xiangling Ding, Yifeng Pan, Qing Gu, Jiyou Chen, Gaobo Yang and Yimao Xiong. Detection of Deep Video Frame Interpolation via Learning Dual-Stream Fusion CNN in the Compression Domain. IEEE International Conference on Multimedia and Expo (ICME), pages 1–6, 2021.
  9. Jiankai Zhuang, Zengchang Qin, Jialu Chen and Tao Wan. A Lightweight Network Model for Video Frame Interpolation using Spatial Pyramids. International Conference on Image Processing (ICIP), pages 543–547, 2020.
  10. Chenguang Li, Donghao Gu, Xueyan Ma, Kai Yang, Shaohui Liu and Feng Jiang. Video Frame Interpolation based on Multi-Scale Convolutional Network and Adversarial Training. IEEE 3rd International Conference on Data Science in Cyberspace, pages 553–560, 2018.
  11. Xiongtao Chen, Wenmin Wang and Jinzhuo Wang. Long-Term Video Interpolation with Bidirectional Predictive Network. VCIP, St. Petersburg, U.S.A., pages 1–4, 2017.
Index Terms

Computer Science
Information Sciences

Keywords

Fusion, Optical Flow, Warping, Video Frame Interpolation