Assembly Output Codes for Learning Neural Networks
P. Tigréat, C. R. K. Lassance, X. Jiang, V. Gripon and C. Berrou, "Assembly Output Codes for Learning Neural Networks," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 285–289, September 2016.
Neural network-based classifiers usually encode the class labels of input data via a completely disjoint code, i.e. a binary vector with a single bit set for each category (one-hot encoding). We use coding theory to propose assembly codes in which each element is associated with several classes, yielding richer target vectors. These codes emulate the combination of several classifiers, a well-known method for improving decision accuracy. Our experiments on datasets such as MNIST with a multi-layer neural network show that assembly output codes, which are characterized by a higher minimum Hamming distance, lead to better classification performance. These codes are also well suited to the use of clustered clique-based networks for category representation.
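A minimal sketch of the idea, not the authors' code: it contrasts one-hot targets with an assembly-style output code in which every output unit is shared by several classes, and decodes a network output by nearest codeword in Hamming distance. The parameters (10 classes, codeword length 32) and the random-code construction are illustrative assumptions; the paper's clustered clique-based construction is not reproduced here.

import numpy as np

num_classes = 10   # e.g. MNIST digits
code_length = 32   # assumed codeword length for the illustration

rng = np.random.default_rng(0)

# One-hot ("completely disjoint") code: minimum Hamming distance is 2.
one_hot = np.eye(num_classes, dtype=int)

# Assembly-style code: random binary codewords, so each output unit is
# active for roughly half the classes instead of a single one.
assembly = rng.integers(0, 2, size=(num_classes, code_length))

def min_hamming_distance(codebook):
    """Smallest pairwise Hamming distance between distinct codewords."""
    n = len(codebook)
    return min(np.sum(codebook[i] != codebook[j])
               for i in range(n) for j in range(i + 1, n))

def decode(output, codebook):
    """Threshold the network output and return the class whose codeword
    is closest in Hamming distance."""
    bits = (output > 0.5).astype(int)
    return int(np.argmin([np.sum(bits != c) for c in codebook]))

print("one-hot  d_min:", min_hamming_distance(one_hot))
print("assembly d_min:", min_hamming_distance(assembly))

# A noisy output is still decoded to the right class as long as fewer
# than d_min / 2 of its bits are flipped.
noisy = assembly[3].astype(float)
noisy[0] = 1.0 - noisy[0]          # flip one bit
print("decoded class:", decode(noisy, assembly))

The larger minimum distance of the shared code is what buys robustness to individual output errors, which is the effect the paper measures on MNIST.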
Download manuscript.
Bibtex
@inproceedings{TigLasJiaGriBer20169,
  author = {Philippe Tigréat and Carlos Rosar Kos Lassance and Xiaoran Jiang and Vincent Gripon and Claude Berrou},
  title = {Assembly Output Codes for Learning Neural Networks},
  booktitle = {Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing},
  year = {2016},
  pages = {285--289},
  month = {September},
}