Compression of Deep Neural Networks on the Fly
Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve the best results, they rely on millions of trainable parameters. However, when targeting embedded applications, the size of these models becomes problematic. As a consequence, their use on smartphones or other resource-limited devices is prohibitive. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.
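To make the compression step more concrete, below is a minimal sketch of Product Quantization applied to a trained fully-connected weight matrix. It is an illustration only: the function names (`product_quantize`, `reconstruct`) and the parameters `num_subvectors` and `num_centroids` are assumptions for this example, and the extra regularization term used during training in the paper is not reproduced here.

```python
# Minimal sketch: Product Quantization (PQ) of a trained weight matrix.
# Each row of W is split into sub-vectors; each sub-vector is replaced by the
# index of its nearest centroid in a per-block codebook learned with k-means.
import numpy as np
from sklearn.cluster import KMeans

def product_quantize(W, num_subvectors=4, num_centroids=16):
    """Quantize W (out_dim x in_dim) block-wise; return codebooks and codes."""
    out_dim, in_dim = W.shape
    assert in_dim % num_subvectors == 0
    chunk = in_dim // num_subvectors
    codebooks, codes = [], []
    for s in range(num_subvectors):
        block = W[:, s * chunk:(s + 1) * chunk]            # (out_dim, chunk)
        km = KMeans(n_clusters=num_centroids, n_init=10).fit(block)
        codebooks.append(km.cluster_centers_)               # (num_centroids, chunk)
        codes.append(km.labels_)                            # (out_dim,)
    return codebooks, np.stack(codes, axis=1)                # codes: (out_dim, num_subvectors)

def reconstruct(codebooks, codes):
    """Rebuild an approximate weight matrix from codebooks and codes."""
    return np.hstack([cb[c] for cb, c in zip(codebooks, codes.T)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 128)).astype(np.float32)
    codebooks, codes = product_quantize(W)
    W_hat = reconstruct(codebooks, codes)
    # Storage drops from 256*128 floats to 4 codebooks of 16*32 floats
    # plus 256*4 small integer codes.
    print("reconstruction MSE:", float(np.mean((W - W_hat) ** 2)))
```

The compression rate here comes from storing small centroid indices instead of full-precision weights; in the paper, the regularization added during training makes the weights easier to quantize in this way.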
Download manuscript.
Bibtex:
@article{SouGriRob20169,
  author  = {Guillaume Soulié and Vincent Gripon and Maëlys Robert},
  title   = {Compression of Deep Neural Networks on the Fly},
  journal = {Lecture Notes in Computer Science},
  year    = {2016},
  volume  = {9887},
  pages   = {153--170},
  month   = {September},
}