Vincent Gripon's website

Blog about my research and teaching

Rethinking Weight Decay For Efficient Neural Network Pruning

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, T. Hannagan and D. Bertrand, "Rethinking Weight Decay For Efficient Neural Network Pruning," in ArXiv Preprint, 2020.

Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite many innovations in recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which performs efficient, continuous pruning throughout training. Our approach, theoretically grounded in Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks and pruning structures. We show that SWD compares favorably to state-of-the-art approaches in terms of performance/parameters ratio on the CIFAR-10, Cora and ImageNet ILSVRC2012 datasets.
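The core idea described above can be illustrated with a minimal sketch: alongside the standard weight decay applied to all weights, an extra decay term is applied only to the weights that the target pruning ratio would remove (here, those of smallest magnitude). The names below (`swd_step`, `a_coeff`, `target_ratio`) are illustrative, not the authors' implementation.

```python
def swd_step(weights, grads, lr=0.1, wd=1e-4, a_coeff=1e-2, target_ratio=0.5):
    """One plain gradient-descent update with selective weight decay:
    all weights get the usual decay `wd`; the fraction `target_ratio`
    of smallest-magnitude weights (the pruning targets) additionally
    get the penalty `a_coeff`, pushing them toward zero."""
    n_targeted = int(len(weights) * target_ratio)
    # Indices of the smallest-magnitude weights, i.e. the pruning targets.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    targeted = set(order[:n_targeted])
    new_weights = []
    for i, (w, g) in enumerate(zip(weights, grads)):
        decay = wd + (a_coeff if i in targeted else 0.0)
        new_weights.append(w - lr * (g + decay * w))
    return new_weights
```

In the paper's setting, the selective penalty is ramped up over the course of training, so the targeted weights decay smoothly toward zero and can be removed at the end without a separate fine-tuning phase; the sketch above shows a single update step.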

Download the manuscript.

Bibtex
@inproceedings{HanBer2020,
  author = {Hugo Tessier and Vincent Gripon and Mathieu Léonardon and Matthieu Arzel and Thomas Hannagan and David Bertrand},
  title = {Rethinking Weight Decay For Efficient Neural Network Pruning},
  booktitle = {ArXiv Preprint},
  year = {2020},
}



