
BRFL: a Benchmark for Robust Federated Learning

This folder contains the datasets and data pipeline for the paper Test-Time Robust Personalization for Federated Learning (ICLR 2023) by Liangze Jiang and Tao Lin, separated from the main codebase for better reuse.

The aim of this benchmark is to properly evaluate the in-distribution (ID) performance and out-of-distribution (OOD) robustness of Federated Learning algorithms at test time (deployment). To this end, diverse distribution shifts are taken into consideration, including common corruptions, label distribution shift, natural distribution shift, and a mixture of ID / OOD tests.

How it works

The benchmark currently contains CIFAR-10 and ImageNet32 (downsampled ImageNet). You can use run.py to obtain the Dataset and DataLoader objects for both of them.
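A minimal usage sketch is shown below. The helper name get_federated_datasets and its arguments are hypothetical placeholders, not the actual interface; check run.py for the real entry point and options.

```python
# Minimal usage sketch. `get_federated_datasets` and its arguments are
# hypothetical placeholders; see run.py for the actual entry point.
import torch
from run import get_federated_datasets  # hypothetical helper name

# Build per-client CIFAR-10 splits with a Dirichlet non-i.i.d. partition.
client_datasets = get_federated_datasets(
    data="cifar10",      # or "imagenet32"
    num_clients=20,      # K non-i.i.d. pieces
    non_iid_alpha=0.1,   # Dirichlet concentration parameter
)

# Wrap one client's local train split in a standard PyTorch DataLoader.
train_loader = torch.utils.data.DataLoader(
    client_datasets[0]["train"], batch_size=64, shuffle=True
)
```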

CIFAR10

  1. The CIFAR-10 train and test sets are merged and then split into K non-i.i.d. pieces via a Dirichlet distribution.
  2. Each of the K pieces is one client's local data and is uniformly and randomly partitioned into local train, val, and test sets (a sketch of steps 1-2 follows this list).
  3. The corrupted test set is obtained by randomly applying a corruption to each local test sample.
  4. The out-of-client test set is obtained by randomly sampling from other clients' test sets, mimicking label distribution shift.
  5. The natural shift test set is obtained by splitting CIFAR10.1 across clients according to their local label distributions.
  6. Finally, the mixture of tests is obtained by randomly sampling from the above ID/OOD test sets.
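The following self-contained sketch illustrates steps 1-2: a Dirichlet non-i.i.d. split over the merged labels, followed by a uniform train/val/test partition per client. Function names and split fractions are illustrative, not the exact implementation in run.py.

```python
# Sketch of steps 1-2: Dirichlet non-i.i.d. split + local partitioning.
# Names and fractions are illustrative, not the repository's implementation.
import numpy as np


def dirichlet_split(labels, num_clients, alpha, seed=0):
    """Split sample indices into `num_clients` non-i.i.d. shards.

    For each class, its samples are distributed across clients according to
    proportions drawn from Dirichlet(alpha); a smaller alpha yields more
    skewed (more heterogeneous) per-client label distributions.
    """
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx_c = np.where(labels == c)[0]
        rng.shuffle(idx_c)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Turn the proportions into split points over this class's samples.
        split_points = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for client_id, shard in enumerate(np.split(idx_c, split_points)):
            client_indices[client_id].extend(shard.tolist())
    return client_indices


def partition_local_data(indices, train_frac=0.7, val_frac=0.1, seed=0):
    """Uniformly and randomly partition one client's indices into train/val/test."""
    rng = np.random.default_rng(seed)
    indices = np.array(indices)
    rng.shuffle(indices)
    n_train = int(train_frac * len(indices))
    n_val = int(val_frac * len(indices))
    return {
        "train": indices[:n_train],
        "val": indices[n_train:n_train + n_val],
        "test": indices[n_train + n_val:],
    }


# Example with synthetic labels standing in for the merged CIFAR-10 set.
labels = np.random.randint(0, 10, size=60_000)
clients = [partition_local_data(idx)
           for idx in dirichlet_split(labels, num_clients=20, alpha=0.1)]
```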

ImageNet32

  1. The data pipeline of ImageNet32 follows the same procedure as CIFAR-10, except that ImageNet-A, ImageNet-R, and ImageNet-V2 are used as the OOD test sets (check run.py for more details; a rough sketch of the label-distribution-matched OOD split follows).
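Step 5 of the CIFAR-10 pipeline assigns the natural-shift set (CIFAR10.1) to clients according to their local label distributions; the sketch below shows one way such a label-distribution-matched split could be done, and the same idea would apply to the ImageNet OOD sets. The function is illustrative; the actual sampling details live in run.py.

```python
# Rough sketch: assign a natural-shift OOD set to one client in proportion
# to that client's local label histogram. Illustrative only.
import numpy as np


def split_ood_by_label_distribution(ood_labels, client_label_hist, seed=0):
    """Sample OOD indices for one client, matching its label histogram.

    ood_labels:        labels of the natural-shift test set (e.g. CIFAR10.1).
    client_label_hist: per-class sample counts of the client's local test set.
    """
    rng = np.random.default_rng(seed)
    selected = []
    for c, count in enumerate(client_label_hist):
        pool = np.where(ood_labels == c)[0]
        if count == 0 or len(pool) == 0:
            continue
        # Draw without replacement, capped by the available OOD samples.
        selected.extend(
            rng.choice(pool, size=min(count, len(pool)), replace=False).tolist()
        )
    return np.array(selected)
```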

Visualizations

We provide notebooks visualization_cifar10.ipynb and visualization_imagenet.ipynb to visualize and explore the local data of each client.

CIFAR10 (non-i.i.d. alpha 0.1)

ImageNet32 (non-i.i.d. alpha 0.01)

Requirements

  • See extra_requirements.sh
  • For ImageNet32, please first download the dataset (registration required) and extract the train & val sets to ./imagenet32/imagenet32/

Citation

If you find this useful in your research, please consider citing:

@inproceedings{jiang2023test,
  title={Test-Time Robust Personalization for Federated Learning},
  author={Jiang, Liangze and Lin, Tao},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year={2023}
}