
TPSeNCE for image rain generation, deraining, and object detection.


[WACV 2024] TPSeNCE: Towards Artifact-Free Realistic Rain Generation for Deraining and Object Detection in Rain

Paper | ArXiv | Supp | Slides | Poster

Title

TPSeNCE: Towards Artifact-Free Realistic Rain Generation for Deraining and Object Detection in Rain
Shen Zheng, Changjie Lu, Srinivasa Narasimhan
In WACV 2024

Updates

(12/20: WACV link is available)

(12/17: Updated deraining instructions)

(11/28: Uploaded checkpoints for night and snowy)

Model Overview

Image Results

Rain Generation (Clear to Rainy)

Deraining (Rainy to Clear)

Object Detection in Rain

Video Results

Rain Generation Video [here]

Object Detection Video [here]

Getting Started

git clone https://github.com/ShenZheng2000/TPSeNCE.git

Dependencies

pip install -r requirements.txt

Dataset Download

Download training and testing images from [here]

Dataset Explanations

Suppose we are translating clear images to rainy images; the images should then be organized under /path_to_your_dataset/ as follows.

A: source images (e.g., clear images)
B: target images (e.g., rainy images)
S: semantic segmentation maps of A
T: semantic segmentation maps of B

Dataset Folder Structure

/path_to_your_dataset/
    ├── trainA
    ├── trainB
    ├── trainS
    ├── trainT
    ├── testA
    ├── testB
    ├── testS
    ├── testT
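
To set this layout up from scratch, here is a minimal sketch that creates the expected folders (assuming /path_to_your_dataset/ is a placeholder you replace with your actual dataset root):

import os

# Hypothetical dataset root; replace with your actual path.
dataset_root = "/path_to_your_dataset"

# A/B hold source/target images; S/T hold their semantic segmentation maps.
for split in ("train", "test"):
    for suffix in ("A", "B", "S", "T"):
        os.makedirs(os.path.join(dataset_root, split + suffix), exist_ok=True)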

NOTE1: Avoiding Empty Test Folders

testS and testT are not used for training or testing. However, make sure to include images in the testS and testT folders, because an empty folder causes errors during training and testing.

For convenience, you can use the following commands to avoid empty folders.

cp -r testA testS
cp -r testB testT

NOTE2: Obtaining Semantic Segmentation Maps

As ground truth semantic segmentation maps are not available for BDD100K, we estimate them using the [ConvNeXt-XL] model from the [MMSegmentation] toolbox. If you are working with a dataset like [Cityscapes], which already includes ground truth semantic segmentation maps, the semantic guidance can be expected to be more effective.
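
As a rough sketch of how such maps can be generated with MMSegmentation's Python API (this assumes the 0.x-style API with init_segmentor/inference_segmentor; the config and checkpoint paths are placeholders for the ConvNeXt-XL files from the MMSegmentation model zoo, not files shipped with this repo):

import os
import numpy as np
from PIL import Image
from mmseg.apis import init_segmentor, inference_segmentor

# Placeholder paths: point these at the ConvNeXt-XL config/checkpoint you downloaded
# from the MMSegmentation model zoo, and at your own image/segmentation folders.
config_file = "convnext_xl_config.py"
checkpoint_file = "convnext_xl_checkpoint.pth"
src_dir = "/path_to_your_dataset/trainA"
dst_dir = "/path_to_your_dataset/trainS"

model = init_segmentor(config_file, checkpoint_file, device="cuda:0")
os.makedirs(dst_dir, exist_ok=True)
for name in os.listdir(src_dir):
    result = inference_segmentor(model, os.path.join(src_dir, name))
    # result[0] is a per-pixel class-label map (H x W numpy array).
    Image.fromarray(result[0].astype(np.uint8)).save(
        os.path.join(dst_dir, os.path.splitext(name)[0] + ".png"))

The same loop can be run for trainB -> trainT; the naming and format of the saved maps just need to match what the data loader expects.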

Training from scratch

Run in terminal.

bash train.sh

Testing with pretrained model

  1. Download the checkpoints from [here]

  2. Unzip the checkpoints (a small extraction sketch is given after this list).

  3. Create folders bdd100k_1_20, INIT, boreas_snowy, etc. under ./checkpoints as shown below.

/TPSeNCE/
    ├── checkpoints
    │   ├── bdd100k_1_20                       (clear2rainy)
    │   ├── INIT                               (clear2rainy)
    │   ├── boreas_snowy                       (clear2snowy)
    │   ├── bdd100k_7_19_night_tri_sem         (day2night)
    │   ├── bdd100k_7_20_snowy_tri_sem         (clear2snowy)
  4. Run in terminal
bash test.sh
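
A minimal extraction sketch for steps 1-3, assuming the download is a single archive named checkpoints.zip (a hypothetical name; adjust it to match the file you actually downloaded) that already contains the folders listed above:

import zipfile

# Hypothetical archive name; adjust to whatever the downloaded checkpoints are called.
with zipfile.ZipFile("checkpoints.zip") as zf:
    zf.extractall("./checkpoints")
# After extraction, ./checkpoints should contain bdd100k_1_20, INIT, boreas_snowy, etc.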

Deraining Experiments

  1. Choose one of the deraining methods below: (1) EffDerain, (2) VRGNet, (3) PreNet, (4) SAPNet

  2. Use TPSeNCE with checkpoint bdd100k_1_20 for testing (clear -> rainy)

  3. Use deraining methods for training (rainy -> clear)

  4. Extract Rainy_bad.zip to obtain Rainy_bad

  5. Run inference with the deraining methods on the 100 heavy-rain images inside Rainy_bad, or on any other real rainy images you prefer.

Citation

If you find this work helpful, please cite

@InProceedings{Zheng_2024_WACV,
    author    = {Zheng, Shen and Lu, Changjie and Narasimhan, Srinivasa G.},
    title     = {TPSeNCE: Towards Artifact-Free Realistic Rain Generation for Deraining and Object Detection in Rain},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {5394-5403}
}

Acknowledgment

This repository is heavily based upon [MoNCE] and [CUT].

This work is supported in part by General Motors.
