wsxtyrdd/Masked-Transformer-For-Image-Compression

This repo is a PyTorch implementation of M2T: Masking Transformers Twice for Faster Decoding.

Install

The latest code is tested on Ubuntu 18.04 LTS, CUDA 11.7, PyTorch 1.9, and Python 3.7. The following libraries are also required to run the code in this repo: constriction, compressai, and timm.
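
If you are setting up the dependencies from scratch, a minimal install could look like the command below (an unpinned sketch; constriction, compressai, and timm are all on PyPI, but you may need to match their versions to your PyTorch/CUDA installation):

pip install constriction compressai timm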

Train

python train.py --config config/mt.yaml # --wandb (if you want to use wandb)

Model checkpoints and logs will be saved in ./history/MT.

Test

python train.py --config config/mt.yaml --test-only --eval-dataset-path 'path_to_kodak'

Performance

The red dot is our reproduction with distortion lambda $\lambda = 0.0035$.

Pretrained models

To be released.

Acknowledgements

We use constriction for the actual entropy coding. Thanks to Fabian Mentzer for helping clarify the details of the paper.
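
For readers unfamiliar with constriction, the sketch below shows the general pattern of range coding integer symbols under per-symbol Gaussian parameters. It is a hypothetical standalone example (the array values and support range are made up), not code taken from this repo, and it assumes constriction's Python range-coder API:

import numpy as np
import constriction

# Toy data: quantized symbols and the Gaussian parameters predicted for them.
symbols = np.array([2, -1, 0, 3], dtype=np.int32)
means = np.array([0.2, -0.7, 0.1, 2.5], dtype=np.float64)
stds = np.array([1.0, 0.8, 1.2, 0.6], dtype=np.float64)

model = constriction.stream.model.QuantizedGaussian(-64, 63)  # supported symbol range
encoder = constriction.stream.queue.RangeEncoder()
encoder.encode(symbols, model, means, stds)
compressed = encoder.get_compressed()  # numpy array of 32-bit words

decoder = constriction.stream.queue.RangeDecoder(compressed)
decoded = decoder.decode(model, means, stds)
assert np.all(decoded == symbols)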
