Vincent Gripon's Homepage

Research and Teaching Blog

Books

2020

V. Gripon, C. Lassance and G. B. Hacene, "DecisiveNets: Training Deep Associative Memories to Solve Complex Machine Learning Problems," ArXiv Preprint, 2020. Manuscript.

2019

B. Pasdeloup, V. Gripon, R. Alami and M. Rabbat, "Uncertainty Principle on Graphs," in L. Stankovic and E. Sejdic (eds.), Vertex-Frequency Analysis of Graph Signals, Springer Nature, pp. 317--340, April 2019. Manuscript.

2012

C. Berrou and V. Gripon, "Petite mathématique du cerveau," Odile Jacob, September 2012.

DecisiveNets: Training Deep Associative Memories to Solve Complex Machine Learning Problems

V. Gripon, C. Lassance and G. B. Hacene, "DecisiveNets: Training Deep Associative Memories to Solve Complex Machine Learning Problems," ArXiv Preprint, 2020.

Learning deep representations to solve complex machine learning tasks has become the prominent trend of the past few years. Indeed, Deep Neural Networks are now the gold standard in domains as varied as computer vision, natural language processing, and even playing combinatorial games. However, this seemingly universal capability hides problematic limitations. Among other things, the explainability of decisions is a major concern, especially since deep neural networks comprise a very large number of trainable parameters. Moreover, computational complexity can quickly become a problem, especially in contexts constrained by real-time operation or limited resources. Therefore, understanding how information is stored, and the impact this storage can have on the system, remains a major open issue. In this chapter, we introduce a method to transform deep neural network models into deep associative memories, with simpler, more explainable and less expensive operations. We show through experiments that this transformation can be done without penalty on predictive performance. The resulting deep associative memories are excellent candidates for artificial intelligence systems that are easier to theorize about and manipulate.
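
As a rough illustration of the general direction, and not the procedure described in the manuscript, the sketch below replaces a trained network's softmax classifier with an associative lookup over stored class prototypes. The class-mean prototypes and cosine-similarity recall are assumptions made purely for this example.

# Illustrative sketch only (assumptions: class-mean prototypes and
# cosine-similarity recall); not the transformation from the manuscript.
import numpy as np

def build_prototypes(features, labels, num_classes):
    # Store one prototype per class: the mean feature vector of that class.
    prototypes = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        prototypes[c] = features[labels == c].mean(axis=0)
    return prototypes

def associative_predict(features, prototypes):
    # Recall by similarity: assign each input to its closest stored prototype.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return np.argmax(f @ p.T, axis=1)

# Toy usage: random vectors stand in for a deep network's penultimate features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
labels = rng.integers(0, 4, size=100)
protos = build_prototypes(feats, labels, num_classes=4)
preds = associative_predict(feats, protos)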

Download manuscript.

Bibtex
@book{GriLasHac2020,
  author = {Vincent Gripon and Carlos Lassance and Ghouthi Boukli Hacene},
  editor = {ArXiv Preprint},
  title = {DecisiveNets: Training Deep Associative Memories to Solve Complex Machine Learning Problems},
  year = {2020},
}

Efficient Representations for Graph and Neural Network Signals

V. Gripon, "Efficient Representations for Graph and Neural Network Signals," ENS Lyon, December 2020.

Download manuscript.

Bibtex
@phdthesis{Gri202012,
  author = {Vincent Gripon},
  title = {Efficient Representations for Graph and Neural Network Signals},
  school = {ENS Lyon},
  year = {2020},
  month = {December},
}

Uncertainty Principle on Graphs

B. Pasdeloup, V. Gripon, R. Alami and M. Rabbat, "Uncertainty Principle on Graphs," in L. Stankovic and E. Sejdic (eds.), Vertex-Frequency Analysis of Graph Signals, Springer Nature, pp. 317--340, April 2019.

Download manuscript.

Bibtex
@inbook{PasGriAlaRab20194,
  author = {Bastien Pasdeloup and Vincent Gripon and Réda Alami and Michael Rabbat},
  editor = {L. Stankovic and E. Sejdic},
  title = {Uncertainty Principle on Graphs},
  pages = {317--340},
  publisher = {Springer Nature},
  year = {2019},
  series = {Vertex-Frequency Analysis of Graph Signals},
  month = {April},
}

Petite mathématique du cerveau

C. Berrou and V. Gripon, "Petite mathématique du cerveau," Odile Jacob, September 2012.

We know much about the neuron, the fundamental component of the brain, but we know almost nothing about mental information. On what kind of support does the brain memorize familiar faces, poems or phone numbers? How does it recall them? Neurobiologists and neuroanatomists cannot settle these purely informational questions. Understanding how the neuron functions is necessary, but it does not seem sufficient to answer the speculative question of mental information. Other concepts, coming from scientific domains foreign to biology, such as information theory and redundant coding, may help provide adequate answers. This work offers a first concrete proposal, mathematically justified and biologically plausible, on the way the neural network stores and retrieves its pieces of knowledge. This novel theory mixes neurons and graphs, error-correcting codes and cortical columns, stationary messages and sequences, and finally neural cliques and tournaments. The development prospects offered by this theory and by the fully digital model of brain memory are numerous and promising, in neuroscience as well as in artificial intelligence.

This book is currently only available in French.
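
To make the clique idea above more concrete, here is a minimal sketch of a clique-based associative memory, written as an illustrative simplification rather than the model developed in the book: the network is split into clusters, each stored message activates one neuron per cluster and is memorized as a clique of binary connections, and a partially erased message is completed by a per-cluster winner-take-all on connection counts.

# Illustrative simplification only: the cluster sizes, binary connections and
# winner-take-all retrieval rule below are assumptions made for this sketch.
import numpy as np

C, L = 4, 8                                # C clusters of L neurons each
W = np.zeros((C * L, C * L), dtype=bool)   # binary connection matrix

def store(message):
    # A message selects one neuron per cluster; storing it adds the clique of
    # connections between all selected neurons.
    idx = [c * L + message[c] for c in range(C)]
    for i in idx:
        for j in idx:
            if i != j:
                W[i, j] = True

def retrieve(partial):
    # partial[c] is the known neuron index in cluster c, or None if erased.
    # Each neuron counts its connections to the active neurons; a winner-take-all
    # per cluster recovers the erased sub-messages.
    active = [c * L + partial[c] for c in range(C) if partial[c] is not None]
    scores = W[:, active].sum(axis=1)
    return [partial[c] if partial[c] is not None
            else int(np.argmax(scores[c * L:(c + 1) * L]))
            for c in range(C)]

store([1, 5, 2, 7])
print(retrieve([1, None, 2, None]))        # prints [1, 5, 2, 7]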


Bibtex
@book{BerGri201209,
  author = {Claude Berrou and Vincent Gripon},
  editor = {Odile Jacob},
  title = {Petite mathématique du cerveau},
  year = {2012},
  month = {September},
}



