Commit 6973063

Merge branch 'master' of github.com:phoenix104104/fast_blind_video_consistency

phoenix104104 committed Jul 31, 2018 · 2 parents f150be8 + 1e5be0b
Showing 1 changed file (README.md) with 65 additions and 1 deletion.
# Learning Blind Video Temporal Consistency (ECCV 2018)

[Wei-Sheng Lai](http://graduatestudents.ucmerced.edu/wlai24/),
[Jia-Bin Huang](https://filebox.ece.vt.edu/~jbhuang/),
and [Ming-Hsuan Yang](http://faculty.ucmerced.edu/mhyang/)
1. [Dataset](#dataset)
1. [Apply Pre-trained Models](#apply-pre-trained-models)
1. [Training and Testing](#training-and-testing)
1. [Evaluation](#evaluation)
1. [Image Processing Algorithms](#image-processing-algorithms)

### Introduction
Expand Down Expand Up @@ -61,6 +62,7 @@ Download our training and testing datasets:

cd data
./download_data.sh [train | test | all]
cd ..

For example, download training data only:

    ./download_data.sh train

You can also download the results of [Bonneel et al. 2015] and our approach:

./download_data.sh results


### Apply pre-trained models
Download pretrained models (including FlowNet2 and our model):

cd pretrained_models
./download_models.sh
cd ..

Test pre-trained model:

python test_pretrained.py -dataset DAVIS -task WCT/wave

The output frames are saved in `data/test/ECCV18/WCT/wave/DAVIS`.

### Training and testing
Train a new model:

python train.py -datasets_tasks W3_D1_C1_I1

We have specified all the default parameters in train.py. `lists/train_tasks_W3_D1_C1_I1.txt` specifies the dataset-task pairs for training.
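The exact format of `lists/train_tasks_W3_D1_C1_I1.txt` is defined by the repository; as a loose illustration only, assuming each non-comment line pairs a dataset name with a task name, a parser could look like this (`parse_task_list` is a hypothetical helper, not part of the repo):

```python
def parse_task_list(text):
    """Return (dataset, task) pairs from a task-list string.

    Assumed (hypothetical) format: one 'DATASET TASK' pair per line,
    with blank lines and '#' comments ignored.
    """
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dataset, task = line.split()[:2]
        pairs.append((dataset, task))
    return pairs
```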

Test a model:

python test.py -method MODEL_NAME -epoch N -dataset DAVIS -task WCT/wave

Check the checkpoint folder for the `MODEL_NAME`.
The output frames are saved in `data/test/MODEL_NAME/epoch_N/WCT/wave/DAVIS`.
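The output layout described above can be sketched as a simple path join; this is an illustration of the directory convention, not code taken from `test.py`:

```python
import posixpath

def output_dir(model_name, epoch, task, dataset):
    """Build the output directory data/test/MODEL_NAME/epoch_N/TASK/DATASET."""
    return posixpath.join("data", "test", model_name,
                          "epoch_%d" % epoch, task, dataset)
```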


You can also generate results for multiple tasks using the following script:

python batch_test.py -method output/MODEL_NAME/epoch_N

which will test all the tasks in `lists/test_tasks.txt`.
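Conceptually, batch testing amounts to invoking `test.py` once per (dataset, task) pair. A hypothetical sketch (not the repo's `batch_test.py`), with flags mirroring the `test.py` invocation shown above:

```python
def build_test_commands(model_name, epoch, pairs):
    """Build one test.py command per (dataset, task) pair."""
    cmds = []
    for dataset, task in pairs:
        cmds.append(["python", "test.py",
                     "-method", model_name,
                     "-epoch", str(epoch),
                     "-dataset", dataset,
                     "-task", task])
    return cmds
```

Each command list could then be executed with `subprocess.run(cmd)`.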


### Evaluation
**Temporal Warping Error**

To compute the temporal warping error, we first need to generate optical flow and occlusion masks:

python compute_flow_occlusion.py -dataset DAVIS -phase test

The flow will be stored in `data/test/fw_flow/DAVIS`. The occlusion masks will be stored in `data/test/fw_occlusion/DAVIS`.

Then, run the evaluation script:

python evaluate_WarpError.py -method output/MODEL_NAME/epoch_N -task WCT/wave
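The temporal warping error is commonly defined as the mean squared difference between frame t and frame t-1 warped onto it by optical flow, averaged over non-occluded pixels. A minimal pure-Python, nearest-neighbour sketch of that definition (not the repo's implementation, which operates on image files and dense flow fields):

```python
def temporal_warp_error(prev, curr, flow, occ):
    """Mean squared error between curr and prev warped onto it.

    prev, curr: H x W grids of intensities
    flow:       H x W grid of (dx, dy) displacements from curr into prev
    occ:        H x W grid, 1 where the pixel is occluded (excluded)
    """
    h, w = len(curr), len(curr[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if occ[y][x]:
                continue
            dx, dy = flow[y][x]
            # nearest-neighbour sampling, clamped to the image border
            sx = min(max(int(round(x + dx)), 0), w - 1)
            sy = min(max(int(round(y + dy)), 0), h - 1)
            d = curr[y][x] - prev[sy][sx]
            total += d * d
            count += 1
    return total / count if count else 0.0
```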

**LPIPS**

Download the [LPIPS repository](https://github.com/richzhang/PerceptualSimilarity) and change `LPIPS_dir` in `evaluate_LPIPS.py` if necessary (the default path is `../LPIPS`).

Run the evaluation script:

python evaluate_LPIPS.py -method output/MODEL_NAME/epoch_N -task WCT/wave

**Batch evaluation**

You can evaluate multiple tasks using the following script:

python batch_evaluate.py -method output/MODEL_NAME/epoch_N -metric LPIPS
python batch_evaluate.py -method output/MODEL_NAME/epoch_N -metric WarpError

### Test on new videos
To test our model on new videos or applications, please follow the folder structure in `./data`.

Given a video, we extract frames named `%05d.jpg` (e.g., `00001.jpg`) and save them in `data/test/input/DATASET/VIDEO`.

The per-frame processed video is stored in `data/test/processed/TASK/DATASET/VIDEO`, where `TASK` is the image processing algorithm applied to the original video.
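The folder structure above can be sketched as follows; `DATASET`, `VIDEO`, and the helper names are placeholders for your own data, not repo code:

```python
import posixpath

def frame_name(i):
    """Frames are named %05d.jpg, e.g. 00001.jpg."""
    return "%05d.jpg" % i

def input_path(dataset, video, i):
    """Path of an original input frame."""
    return posixpath.join("data", "test", "input", dataset, video, frame_name(i))

def processed_path(task, dataset, video, i):
    """Path of the corresponding per-frame processed frame."""
    return posixpath.join("data", "test", "processed", task, dataset, video, frame_name(i))
```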


### Image Processing Algorithms
Expand Down
