Vincent Gripon's Homepage

Research and Teaching Blog

Rethinking Weight Decay For Efficient Neural Network Pruning

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, T. Hannagan and D. Bertrand, "Rethinking Weight Decay For Efficient Neural Network Pruning," in ArXiv Preprint, 2020.

Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite the many innovations of recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which realizes efficient continuous pruning throughout training. Our approach, theoretically grounded in Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks and pruning structures. We show that SWD compares favorably to state-of-the-art approaches in terms of performance/parameters ratio on the CIFAR-10, Cora and ImageNet ILSVRC2012 datasets.
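The core idea, selectively applying a stronger weight-decay penalty to the weights targeted for pruning, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the magnitude-quantile selection rule, and all default values are assumptions for the sake of the example.

```python
import numpy as np

def swd_update(w, grad, lr=0.1, base_decay=1e-4, swd_coeff=1.0, sparsity=0.5):
    """One SGD step with a selective weight-decay penalty (hypothetical sketch).

    Weights whose magnitude falls below the sparsity-quantile threshold
    receive an extra decay term, continuously pushing them toward zero so
    that they can later be pruned with little loss of accuracy. In practice
    the selective coefficient would be increased over the course of training.
    """
    threshold = np.quantile(np.abs(w), sparsity)  # magnitude cutoff
    mask = np.abs(w) < threshold                  # weights targeted for pruning
    decay = base_decay + swd_coeff * mask         # extra decay only on the mask
    return w - lr * (grad + decay * w)            # plain SGD step with penalty
```

Iterating this update drives the selected weights toward zero while leaving the remaining weights subject only to the usual weight decay, so pruning at the end of training removes weights that are already near zero.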

Download manuscript.

Bibtex
@inproceedings{TesGriLéArzHanBer2020,
  author = {Hugo Tessier and Vincent Gripon and Mathieu Léonardon and Matthieu Arzel and Thomas Hannagan and David Bertrand},
  title = {Rethinking Weight Decay For Efficient Neural Network Pruning},
  booktitle = {ArXiv Preprint},
  year = {2020},
}




