Python 3.6

LADN: Local Adversarial Disentangling Network for Facial Makeup and De-Makeup

[Project Page][Paper]

PyTorch implementation of our network, LADN, for makeup transfer and removal. LADN achieves state-of-the-art results not only on conventional styles but also on complex and dramatic styles with high-frequency details covering large areas across multiple facial features. We also collect a dataset of unpaired before- and after-makeup face images.

Contact: Qiao Gu ([email protected]) and Guanzhi Wang ([email protected])

Synthetic Ground Truth Generation

The makeup transfer pipeline with no deep-learning components has been released here.

Paper

LADN: Local Adversarial Disentangling Network for Facial Makeup and De-Makeup
Qiao Gu*, Guanzhi Wang*, Mang Tik Chiu, Yu-Wing Tai, Chi-Keung Tang
arXiv preprint arXiv:1904.11272 (*Equal contribution. Authorship order was determined by rolling dice.)

Please cite our paper if you find the code or dataset useful for your research.

@inproceedings{gu2019ladn,
  title={{LADN}: Local Adversarial Disentangling Network for Facial Makeup and De-Makeup},
  author={Gu, Qiao and Wang, Guanzhi and Chiu, Mang Tik and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={10481--10490},
  year={2019}
}

Usage

Install

  • Clone this repo.
git clone https://github.com/wangguanzhi/LADN.git

Install required packages

We recommend installing the required packages using Anaconda.

cd LADN
conda create -n makeup-train python=3.6
source activate makeup-train

Please install PyTorch according to your hardware configuration. (This implementation has been tested on Ubuntu 16.04 with CUDA 9.0 and cuDNN 7.5.) Then install the following packages.

conda install requests
conda install -c conda-forge tensorboardx
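Before training, it can be worth checking that the key packages are importable from the makeup-train environment. A minimal sketch (the package list mirrors the steps above; module names such as tensorboardX are assumptions and may need adjusting for your setup):

```python
# Sanity-check the training environment without importing heavyweight
# packages at module load time.
import importlib.util

def check_environment(packages=("torch", "requests", "tensorboardX")):
    """Map each package name to whether it can be found on this system."""
    return {name: importlib.util.find_spec(name) is not None for name in packages}

if __name__ == "__main__":
    for name, found in check_environment().items():
        print(f"{name}: {'OK' if found else 'MISSING'}")
```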

Download makeup dataset

  • We release a dataset of unpaired before- and after-makeup face images, together with the synthetic ground truth.
  • Our code uses the Face++ Detection API for facial landmarks, and the downloaded dataset includes the pre-detected landmarks for the dataset images.

Please download the zipped dataset from Google Drive, put it in the LADN/datasets/ folder, and unzip it.
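The bundled landmarks can be inspected directly with Python's pickle module. A minimal sketch, assuming a standard pickle file; the actual file name inside the archive and its internal structure (e.g. a dict keyed by image name) are assumptions to verify after unzipping:

```python
import pickle

def load_landmarks(pickle_path):
    """Load the pre-detected facial landmarks shipped with the dataset.

    The returned structure is assumed to map image names to landmark
    coordinates -- print a few entries to confirm the actual layout.
    """
    with open(pickle_path, "rb") as f:
        return pickle.load(f)

# Example (hypothetical path inside the unzipped dataset):
# landmarks = load_landmarks("../datasets/makeup/landmarks.pickle")
```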

Training

  • Please change CUDA_VISIBLE_DEVICES and --name makeup accordingly. If the memory of one GPU is not enough for training, set --backup_gpu to another available GPU ID.
  • The pre-detected landmarks are included in the provided dataset as a pickle file; they are loaded and used for training by default.

Scripts to activate the required environment and start a standard training run:

cd src
source activate makeup-train
CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0,1 python3 run.py --backup_gpu 1 --dataroot ../datasets/makeup --name makeup --resize_size 576 --crop_size 512 --local_style_dis --n_local 12 --local_laplacian_loss --local_smooth_loss
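The --resize_size 576 / --crop_size 512 pair suggests each image is resized to 576x576 and a random 512x512 window is then cropped, a common augmentation. A pure-Python sketch of the crop-window arithmetic (the actual logic lives in run.py and may differ):

```python
import random

def random_crop_box(resize_size=576, crop_size=512, rng=None):
    """Pick a random crop window inside a resize_size x resize_size image.

    Returns (left, top, right, bottom), matching the --resize_size and
    --crop_size options used in the training command above.
    """
    if crop_size > resize_size:
        raise ValueError("crop_size must not exceed resize_size")
    rng = rng or random.Random()
    left = rng.randint(0, resize_size - crop_size)
    top = rng.randint(0, resize_size - crop_size)
    return (left, top, left + crop_size, top + crop_size)
```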

Download and run pre-trained models

  • We release two pre-trained models (light.pth and extreme.pth) for your reference.
  • light.pth performs better on light/conventional makeup styles.
  • extreme.pth performs better on extreme/highly dramatic makeup styles.

Please download the pre-trained model files, put them in the models folder, and run the following commands to test the models.

For light.pth

CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0,1 python3 run.py --backup_gpu 1 --dataroot ../datasets/makeup --name makeup_test --resize_size 576 --crop_size 512 --local_style_dis --n_local 12 --phase test --test_forward --test_random --result_dir ../results --test_size 300 --resume ../models/light.pth --no_extreme

For extreme.pth

CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0,1 python3 run.py --backup_gpu 1 --dataroot ../datasets/makeup --name makeup_test --resize_size 576 --crop_size 512 --local_style_dis --n_local 12 --phase test --test_forward --test_random --result_dir ../results --test_size 300 --resume ../models/extreme.pth --extreme_only
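Once a test run finishes, the generated images can be collected from the directory passed as --result_dir. A small sketch (the sub-folder layout and image extensions written by run.py are assumptions):

```python
from pathlib import Path

def list_results(result_dir="../results", patterns=("*.png", "*.jpg")):
    """Recursively collect generated images under the result directory."""
    root = Path(result_dir)
    files = []
    for pattern in patterns:
        files.extend(root.rglob(pattern))
    return sorted(files)
```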

Acknowledgement

Our code is inspired by DRIT and DeepHDR.
