Vincent Gripon's Homepage

Research and Teaching Blog

Rethinking Weight Decay For Efficient Neural Network Pruning

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, T. Hannagan and D. Bertrand, "Rethinking Weight Decay For Efficient Neural Network Pruning," in Journal of Imaging, Volume 8, Number 3, March 2022.

Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite the many innovations introduced over the last decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which realizes efficient, continuous pruning throughout training. Our approach, theoretically grounded in Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks, and pruning structures. We show that SWD compares favorably to state-of-the-art approaches, in terms of performance-to-parameters ratio, on the CIFAR-10, Cora, and ImageNet ILSVRC2012 datasets.
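The full algorithm is described in the manuscript; as a rough illustration of the idea, here is a minimal PyTorch-style sketch in which an extra decay term is applied only to the weights currently targeted for pruning. The targeting rule (bottom fraction of weights by magnitude, recomputed at each step) and the coefficient mu are assumptions for the sketch, not the paper's exact formulation:

import torch

def selective_weight_decay(model, prune_ratio, mu):
    # Extra L2 penalty applied only to the weights currently targeted
    # for pruning. Sketch only: the targeted set is taken to be the
    # bottom `prune_ratio` fraction of weights by magnitude, recomputed
    # at each call; `mu` is a hypothetical penalty coefficient.
    weights = [p for p in model.parameters() if p.dim() > 1]
    flat = torch.cat([p.detach().abs().view(-1) for p in weights])
    k = int(prune_ratio * flat.numel())
    if k == 0:
        return torch.zeros((), device=flat.device)
    threshold = flat.kthvalue(k).values  # magnitude cut-off
    penalty = torch.zeros((), device=flat.device)
    for p in weights:
        mask = p.abs() <= threshold
        penalty = penalty + (p[mask] ** 2).sum()
    return mu * penalty

# Hypothetical training step: the SWD term is added on top of the task
# loss (and any standard weight decay). Ramping mu up during training
# progressively drives the targeted weights to zero, so they can be
# removed at the end of training.
# loss = criterion(model(x), y) + selective_weight_decay(model, 0.9, mu)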

Download manuscript.

Bibtex
@article{TesGriLéArzHanBer2022,
  author = {Hugo Tessier and Vincent Gripon and Mathieu Léonardon and Matthieu Arzel and Thomas Hannagan and David Bertrand},
  title = {Rethinking Weight Decay For Efficient Neural Network Pruning},
  journal = {Journal of Imaging},
  year = {2022},
  volume = {8},
  number = {3},
  month = {March},
}



