
Learning deep representations to solve complex machine learning tasks has become a prominent trend in the past few years. Indeed, deep neural networks are now the gold standard in domains as varied as computer vision, natural language processing and even combinatorial game playing. However, problematic limitations hide behind this surprisingly universal capability. Among other things, explainability of decisions is a major concern, especially since deep neural networks comprise a very large number of trainable parameters. Moreover, computational complexity can quickly become a problem, especially in contexts constrained by real time or limited resources. Therefore, understanding how information is stored, and the impact this storage can have on the system, remains a major open issue. In this chapter, we introduce a method to transform deep neural network models into deep associative memories, with simpler, more explainable and less expensive operations. We show through experiments that these transformations can be performed without penalty on predictive performance. The resulting deep associative memories are excellent candidates for artificial intelligence that is easier to theorize about and manipulate.
Download manuscript.
Bibtex:
@book{GriLasHac2020,
  author = {Vincent Gripon and Carlos Lassance and Ghouthi Boukli Hacene},
  editor = {ArXiv Preprint},
  title = {DecisiveNets: Training Deep Associative Memories to Solve Complex Machine Learning Problems},
  year = {2020},
}

Download manuscript.
Bibtex:
@phdthesis{Gri202012,
  author = {Vincent Gripon},
  title = {Efficient Representations for Graph and Neural Network Signals},
  school = {ENS Lyon},
  year = {2020},
  month = {December},
}

B. Pasdeloup, V. Gripon, R. Alami and M. Rabbat, "Uncertainty Principle on Graphs," in L. Stankovic and E. Sejdic (eds.), Vertex-Frequency Analysis of Graph Signals, pp. 317-340, April 2019.
Download manuscript.
Bibtex:
@inbook{PasGriAlaRab20194,
  author = {Bastien Pasdeloup and Vincent Gripon and Réda Alami and Michael Rabbat},
  editor = {L. Stankovic and E. Sejdic},
  title = {Uncertainty Principle on Graphs},
  pages = {317-340},
  publisher = {Springer Nature},
  year = {2019},
  series = {Vertex-Frequency Analysis of Graph Signals},
  month = {April},
}

We know much about the neuron, the fundamental component of the brain, but almost nothing about mental information. On what kind of support does the brain memorize known faces, poems or phone numbers? How does it retrieve them? Neurobiologists and neuroanatomists are unable to answer these purely informational questions. While understanding the operating principles of the neuron is necessary, it does not seem sufficient to answer the speculative question of mental information. Other concepts, coming from scientific domains foreign to biology, such as information theory and redundant coding, may help find adequate answers. This work offers a first concrete idea, mathematically justified and biologically plausible, of the way the neural network stores and retrieves its knowledge items. This novel theory mixes neurons and graphs, error-correcting codes and cortical columns, stationary messages and sequences, and finally neural cliques and tournaments. The development prospects offered by this theory, and by the fully digital model of brain memory it proposes, are many and promising, in neuroscience as well as in artificial intelligence.
This book is currently only available in French.
Bibtex:
@book{BerGri201209,
  author = {Claude Berrou and Vincent Gripon},
  editor = {Odile Jacob},
  title = {Petite mathématique du cerveau},
  year = {2012},
  month = {September},
}
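The neural cliques mentioned in the abstract above can be illustrated with a minimal sketch in the spirit of clique-based associative memories: messages are split across clusters of neurons, each stored message becomes a fully interconnected set (a clique) of one neuron per cluster, and retrieval completes a partial message by winner-take-all decoding. All names and toy dimensions here (`c`, `l`, `store`, `retrieve`) are illustrative assumptions, not the book's exact model.

```python
import numpy as np

# Illustrative parameters (assumptions, not the book's values):
c, l = 4, 16          # number of clusters, neurons per cluster
n = c * l             # total number of neurons
W = np.zeros((n, n), dtype=bool)  # binary synaptic adjacency matrix

def store(message):
    """Store a message (one symbol in [0, l) per cluster) as a clique."""
    idx = [cluster * l + symbol for cluster, symbol in enumerate(message)]
    for i in idx:
        for j in idx:
            if i != j:
                W[i, j] = True

def retrieve(partial, iterations=4):
    """Complete a partially erased message (-1 marks erased symbols)
    by iterated winner-take-all decoding within each cluster."""
    active = np.zeros(n, dtype=bool)
    for cluster, symbol in enumerate(partial):
        if symbol >= 0:
            active[cluster * l + symbol] = True
        else:  # unknown symbol: provisionally activate the whole cluster
            active[cluster * l:(cluster + 1) * l] = True
    for _ in range(iterations):
        scores = W @ active  # count active neighbours of each neuron
        new_active = np.zeros(n, dtype=bool)
        for cluster in range(c):
            seg = scores[cluster * l:(cluster + 1) * l]
            # keep only the best-supported neuron(s) in each cluster
            new_active[cluster * l:(cluster + 1) * l] = (seg == seg.max())
        active = new_active
    return [int(np.argmax(active[k * l:(k + 1) * l])) for k in range(c)]

store([3, 7, 1, 12])
print(retrieve([3, -1, 1, 12]))  # the erased symbol is recovered
```

The key property this sketch shows is error correction through redundancy: because a stored clique is densely interconnected, the neurons of a known sub-message jointly point to the missing one, much like a redundant code recovers erased symbols.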
