Wei-Sheng Lai, Jia-Bin Huang, Oliver Wang, Eli Shechtman, Ersin Yumer, and Ming-Hsuan Yang
European Conference on Computer Vision (ECCV), 2018
[Project page][Paper]
- Introduction
- Citation
- Requirements and Dependencies
- Installation
- Dataset
- Apply Pre-trained Models
- Training and Testing
- Image processing algorithms
Our method takes the original unprocessed video and the per-frame processed video as inputs and produces a temporally consistent output video. The approach is agnostic to the specific image processing algorithm applied to each frame of the original video.
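A minimal sketch of this interface, where the hypothetical `temporal_filter` stands in for the learned recurrent network (the names and the blending rule below are illustrative assumptions, not the authors' model): the stabilizer consumes the unprocessed frames alongside the per-frame processed frames, so any image processing algorithm can supply the processed input.

```python
def temporal_filter(processed_frame, prev_output, alpha=0.7):
    """Toy stand-in for the learned network: exponentially blend the
    current per-frame processed result toward the previous stabilized
    output to suppress flicker. The real model instead predicts the
    correction from both the original and processed frames."""
    return [alpha * p + (1.0 - alpha) * o
            for p, o in zip(processed_frame, prev_output)]

def stabilize(original_frames, processed_frames):
    """Produce a temporally consistent sequence from per-frame results.

    `original_frames` is unused by this toy filter, but it is an input
    to the actual network, which is what makes the method agnostic to
    the processing algorithm.
    """
    outputs = [processed_frames[0]]  # first frame passes through unchanged
    for orig, proc in zip(original_frames[1:], processed_frames[1:]):
        outputs.append(temporal_filter(proc, outputs[-1]))
    return outputs

# Frames are flat lists of pixel intensities for simplicity.
video = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
processed = [[1.0, 1.0], [0.0, 0.0], [1.0, 1.0]]  # flickering per-frame output
stable = stabilize(video, processed)
```

The toy blend damps frame-to-frame flicker (`processed` alternates between 0 and 1; `stable` varies far less), which is the qualitative behavior the learned model is trained to achieve.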
If you find the code and datasets useful in your research, please cite:
@inproceedings{Lai-ECCV-2018,
author = {Lai, Wei-Sheng and Huang, Jia-Bin and Wang, Oliver and Shechtman, Eli and Yumer, Ersin and Yang, Ming-Hsuan},
title = {Learning Blind Video Temporal Consistency},
booktitle = {European Conference on Computer Vision},
year = {2018}
}
- PyTorch 0.4
- TensorboardX
- LPIPS (for evaluation)
Download repository:
$ git clone https://github.com/phoenix104104/fast_blind_video_consistency.git
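After cloning, the dependencies listed above can typically be installed with pip. The package names below are assumptions inferred from the requirements list; PyTorch 0.4 is a legacy release, so consult pytorch.org for an install command matching your CUDA setup.

```shell
cd fast_blind_video_consistency

# Package names are assumptions; pin versions as needed for PyTorch 0.4.
pip install tensorboardX
pip install lpips   # LPIPS perceptual metric, used for evaluation
```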
We use the following algorithms to obtain per-frame processed results:
- Style transfer