
FuRPE

Official code of FuRPE: Learning Full-body Reconstruction from Part Experts.

Paper url: https://arxiv.org/pdf/2212.00731.pdf


Full-body reconstruction is a fundamental but challenging task. Owing to the lack of annotated data, the performance of existing methods is largely limited. In this paper, we propose a novel method named Full-body Reconstruction from Part Experts (FuRPE) to tackle this issue. In FuRPE, the network is trained using pseudo labels and features generated from part experts. A simple yet effective pseudo ground-truth selection scheme is proposed to extract high-quality pseudo labels. In this way, a large number of existing human body reconstruction datasets can be leveraged to contribute to model training. In addition, an exponential moving average training strategy is introduced to train the network in a self-supervised manner, further boosting the performance of the model. Extensive experiments on several widely used datasets demonstrate the effectiveness of our method over the baseline. Our method achieves state-of-the-art performance. Code will be publicly available for further research.

The code is built on top of ExPose (https://github.com/vchoutas/expose).

USER GUIDANCE

Training

We provide three training scripts: the standard version [mytrain.py], the multi-loss version [mytrain_multiloss.py], and the EMA version [mytrain_ema.py].

  1. [mytrain.py] command (linux) :
    CUDA_VISIBLE_DEVICES=GPU_ID(eg: 0/1) nohup python mytrain.py --exp-cfg=data/config.yaml >log.train 2>&1 &

    • You can enable or disable feature distillation for each of the three parts (body, hand, and face) at the top of the script.
    • --exp-cfg specifies the model configuration file; the most important options include:
      • the number of training epochs
      • the batch size
      • the saving path and frequency for checkpoints
      • the weights of each component of the training loss
      • etc.
    • The training log will be recorded in log.train.
  2. [mytrain_multiloss.py] command (linux) :
    CUDA_VISIBLE_DEVICES=GPU_ID(eg: 0/1) nohup python mytrain_multiloss.py --exp-cfg=data/config.yaml >log.train 2>&1 &

  3. [mytrain_ema.py] command (linux), which trains with the exponential moving average (EMA) strategy (a sketch of the EMA update is given after this list):
    CUDA_VISIBLE_DEVICES=GPU_ID(eg: 0/1) nohup python mytrain_ema.py --exp-cfg=data/config.yaml >log.train 2>&1 &
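
For reference, below is a minimal sketch of the EMA update that the strategy in mytrain_ema.py is based on, as described in the paper. The function update_ema and the commented training loop are illustrative assumptions, not the repository's actual API.

    import copy
    import torch

    def update_ema(teacher, student, decay=0.999):
        # Teacher parameters follow an exponential moving average of the student.
        with torch.no_grad():
            for t_p, s_p in zip(teacher.parameters(), student.parameters()):
                t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

    # Hypothetical usage inside a training loop:
    # student = build_model(cfg)           # updated by backprop
    # teacher = copy.deepcopy(student)     # updated only through EMA
    # for batch in loader:
    #     loss = criterion(student(batch), batch['pseudo_labels'])
    #     loss.backward(); optimizer.step(); optimizer.zero_grad()
    #     update_ema(teacher, student)     # teacher provides stable targets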

Evaluation

We provide evaluation on the EHF whole-body indoor dataset and on the outdoor 3DPW test set (body labels only).
In addition, part-specific evaluations of the hand and head sub-networks are also provided.

  1. Whole-body evaluation on EHF:
    CUDA_VISIBLE_DEVICES=GPU_ID(eg: 0/1) python inference.py --exp-cfg=data/config.yaml --datasets=ehf --show=False --output-folder eval_output --save-mesh=False --save-params=False --save-vis=False

  2. Body evaluation on 3DPW testset:
    CUDA_VISIBLE_DEVICES=GPU_ID(eg: 0/1) python inference.py --exp-cfg=data/config.yaml --datasets=threedpw --show=False --output-folder eval_output --save-mesh=False --save-params=False --save-vis=False
    Before running the command, expose/evaluation.py needs to be modified around lines 723~729 to change the J_regressor from SMPL-X to SMPL, because 3DPW only provides ground truth in SMPL format (a sketch of the joint-regressor swap is given after this list).

  3. Hand evaluation on FREIHAND testset:
    CUDA_VISIBLE_DEVICES=GPU_ID(eg: 0/1) python inference_freihand.py --exp-cfg=data/config.yaml --datasets=ehf --show=False --output-folder eval_output --save-mesh=False --save-params=False --save-vis=False

    cd /data/panyuqing/freihand

    python eval.py /data/panyuqing/freihand/evaluation /data/panyuqing/freihand/evaluation/output
    The FreiHAND evaluation code can be installed and run by following https://github.com/lmb-freiburg/freihand (replace /data/panyuqing/freihand above with your local FreiHAND path).

  4. Head evaluation on NoW testset:
    CUDA_VISIBLE_DEVICES=GPU_ID(eg: 0/1) python inference_nowface.py --exp-cfg=data/config.yaml --datasets=ehf --show=False --output-folder eval_output --save-mesh=False --save-params=False --save-vis=False
    The dataset authors do not publish the ground truth of the test set, so the code only generates predicted results. If evaluation metrics are needed, you can submit the results to the authors as described at https://ringnet.is.tue.mpg.de/download.php.
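
For reference, the J_regressor change mentioned in item 2 amounts to regressing 3D joints from mesh vertices with an SMPL joint regressor instead of the SMPL-X one before computing the error. Below is a minimal, generic sketch; the file name J_regressor_smpl.npy and the tensor names are illustrative assumptions, and the regressor must match the vertex count of the mesh it is applied to.

    import numpy as np
    import torch

    # Hypothetical SMPL joint regressor of shape (J, V), e.g. (24, 6890).
    J_regressor = torch.from_numpy(np.load('J_regressor_smpl.npy')).float()

    def regress_joints(vertices, regressor):
        # vertices: (B, V, 3); regressor: (J, V) -> joints: (B, J, 3)
        return torch.einsum('jv,bvc->bjc', regressor, vertices)

    # pred_joints = regress_joints(pred_vertices, J_regressor)
    # gt_joints   = regress_joints(gt_vertices,   J_regressor)
    # mpjpe_mm = 1000.0 * (pred_joints - gt_joints).norm(dim=-1).mean()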

Demo

The demo videos can be generated by running:
CUDA_VISIBLE_DEVICES=GPU_ID(eg: 0/1) nohup python mydemo.py --image-folder test_video_or_imagedir_path --exp-cfg=data/config.yaml --output-folder output_demo --save-vis=True >log.mydemo 2>&1 &
You can set the input video or image folder with --image-folder, and the output folder with --output-folder.
Here, config.yaml is only used to locate the checkpoint path.

MODEL DOWNLOAD

URL: https://pan.baidu.com/s/1TCP-UFrUwsYHhJ0oHYCNlg

pwd: pupb

EXPERT SYSTEM

Our method uses the expert system to generate pseudo-labeled data of whole-body pose (body, hand, and head pose, shape, and expression), which can be used in model training to improve the model's performance.
The expert system consists of 3 experts:

  1. The body expert: SPIN (https://github.com/nkolot/SPIN)
  2. The hand expert: FrankMocap (https://github.com/facebookresearch/frankmocap)
  3. The head expert: DECA (https://github.com/YadiraF/DECA)
We use the models of the three experts published by their authors to generate part labels, then integrate them to obtain whole-body labels with 3D vertices for training (a sketch of this integration is given below).
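
For reference, below is a minimal sketch of how the part-expert outputs might be assembled into a whole-body SMPL-X label with 3D vertices. The parameter names follow the smplx Python package, the 'models' path is a placeholder, and the zero-filled tensors stand in for real expert outputs already converted to SMPL-X conventions; the repository's actual integration code may differ.

    import torch
    import smplx

    # Hypothetical part-expert outputs, already in SMPL-X conventions.
    body_out = {'global_orient': torch.zeros(1, 3),        # from SPIN
                'body_pose': torch.zeros(1, 63),
                'betas': torch.zeros(1, 10)}
    hand_out = {'left_hand_pose': torch.zeros(1, 45),      # from FrankMocap
                'right_hand_pose': torch.zeros(1, 45)}
    face_out = {'jaw_pose': torch.zeros(1, 3),              # from DECA
                'expression': torch.zeros(1, 10)}

    # 'models' is a placeholder path to the SMPL-X model files.
    model = smplx.create('models', model_type='smplx', use_pca=False)
    output = model(**body_out, **hand_out, **face_out)
    pseudo_vertices = output.vertices   # (1, 10475, 3) whole-body mesh label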

Before sending images to the expert system, we use MediaPipe (https://google.github.io/mediapipe/), an open-source human part detection framework, to crop the body, hand, and head regions (a cropping sketch is given below).
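
For reference, below is a minimal sketch of part cropping with MediaPipe Holistic. The helper crop_from_landmarks, the margin value, and the input path are illustrative assumptions, not the repository's preprocessing code.

    import cv2
    import mediapipe as mp
    import numpy as np

    def crop_from_landmarks(image, landmarks, margin=0.2):
        # Crop a padded bounding box around a set of normalized landmarks.
        h, w = image.shape[:2]
        xs = np.array([lm.x for lm in landmarks.landmark]) * w
        ys = np.array([lm.y for lm in landmarks.landmark]) * h
        pad = margin * max(xs.max() - xs.min(), ys.max() - ys.min())
        x0, y0 = max(int(xs.min() - pad), 0), max(int(ys.min() - pad), 0)
        x1, y1 = min(int(xs.max() + pad), w), min(int(ys.max() + pad), h)
        return image[y0:y1, x0:x1]

    image = cv2.imread('example.jpg')  # hypothetical input path
    with mp.solutions.holistic.Holistic(static_image_mode=True) as holistic:
        results = holistic.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    # Each crop is then sent to its part expert (SPIN / FrankMocap / DECA).
    if results.pose_landmarks:
        body_crop = crop_from_landmarks(image, results.pose_landmarks)
    if results.right_hand_landmarks:
        hand_crop = crop_from_landmarks(image, results.right_hand_landmarks)
    if results.face_landmarks:
        face_crop = crop_from_landmarks(image, results.face_landmarks)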
