2022-Flooding

This is a TensorFlow implementation of the Flooding method from: Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, and Masashi Sugiyama. 2020. Do we need zero training loss after achieving zero training error? In Proceedings of the 37th International Conference on Machine Learning (ICML'20). JMLR.org, Article 428, 4604–4614.

Scripts for MNIST, Fashion-MNIST, and KMNIST are in the models folder. This work can be extended in several ways, for example by implementing the Flooding algorithm for deep CNNs or by training while varying the flooding constant.

Algorithm

Proposed objective function:

\tilde{J}(\theta) = |J(\theta) - b| + b

where J(θ) is the original training objective and b is the flooding constant (flooding level).
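
As a concrete illustration, here is a minimal TensorFlow/Keras sketch of this objective (not the repository's actual code): the wrapper name flooding_loss, the value b=0.02, and the toy MLP are all illustrative assumptions.

import tensorflow as tf

def flooding_loss(base_loss, b=0.02):
    # Wraps a base loss J with the flooding objective |J - b| + b.
    # b (the flooding level) is a hyperparameter; 0.02 is an arbitrary
    # illustrative value, not a recommendation.
    def loss_fn(y_true, y_pred):
        j = tf.reduce_mean(base_loss(y_true, y_pred))  # mini-batch average J(theta)
        return tf.abs(j - b) + b                       # flooded objective
    return loss_fn

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss=flooding_loss(tf.keras.losses.sparse_categorical_crossentropy),
    metrics=["accuracy"],
)

Because the absolute value is taken around the mini-batch average loss, the model performs ordinary gradient descent while J(θ) > b and gradient ascent (floating back above the flood level) while J(θ) < b.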

Results

The following figures show the performance of an MLP classifier on the Fashion-MNIST dataset when trained with and without Flooding. Without Flooding the test loss diverges, whereas with Flooding it stabilizes.

[Figures: per-dataset comparison of accuracies and losses, with and without Flooding, for Fashion-MNIST and CIFAR-10.]

Requirements

  • keras==2.8.0
  • matplotlib==3.5.2
  • numpy==1.22.3
  • tensorflow==2.8.0
  • protobuf==3.20.1
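
Assuming pip and Python 3 are available, the pinned versions above can be installed with:

pip install keras==2.8.0 matplotlib==3.5.2 numpy==1.22.3 tensorflow==2.8.0 protobuf==3.20.1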

Demo

Run the following command for a demo of Flooding on the MNIST dataset:

python demo.py

Run the following command for a demo of Adaptive Flooding on the MNIST dataset:

python demo_adaptive.py
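
For reference, one plausible way to vary the flooding constant during training (as the name demo_adaptive.py suggests) is to keep b in a tf.Variable shared between the loss function and a Keras callback. The schedule below (a fixed per-epoch increment capped at b_max) is purely an illustrative assumption and may not match the repository's actual scheme.

import tensorflow as tf

b = tf.Variable(0.0, trainable=False, dtype=tf.float32)  # shared flooding level

def adaptive_flooding_loss(y_true, y_pred):
    # Flooding objective |J - b| + b, reading the current value of b.
    j = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred))
    return tf.abs(j - b) + b

class FloodingSchedule(tf.keras.callbacks.Callback):
    # Illustrative schedule: raise b by a fixed step each epoch, up to b_max.
    def __init__(self, step=0.01, b_max=0.1):
        super().__init__()
        self.step = step
        self.b_max = b_max

    def on_epoch_end(self, epoch, logs=None):
        b.assign(min(float(b.numpy()) + self.step, self.b_max))

Compiling with loss=adaptive_flooding_loss and fitting with callbacks=[FloodingSchedule()] would then wire the two pieces together.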

References

  • Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, and Masashi Sugiyama. 2020. Do we need zero training loss after achieving zero training error? In Proceedings of the 37th International Conference on Machine Learning (ICML'20). JMLR.org, Article 428, 4604–4614
  • https://www.tensorflow.org/
