Vincent Gripon's Homepage

Research and Teaching Blog

Conference Proceedings

2017

V. Gripon, "Tropical Graph Signal Processing," in Proceedings of the Asilomar conference, October 2017. To appear. Manuscript.

T. Stérin, N. Farrugia and V. Gripon, "An Intrinsic Difference Between Vanilla RNNs and GRU Models," in Proceedings of Cognitive, pp. 76--81, February 2017. Manuscript.

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Finding All Matches in a Database using Binary Neural Networks," in Proceedings of Cognitive, pp. 59--64, February 2017. Manuscript.

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Performance of Neural Clique Networks Subject to Synaptic Noise," in Proceedings of Cognitive, pp. 4--9, February 2017. Manuscript.

2016

N. Grelier, B. Pasdeloup, J. Vialatte and V. Gripon, "Neighborhood-Preserving Translations on Graphs," in Proceedings of GlobalSIP, pp. 410--414, October 2016.

P. Tigréat, C. R. K. Lassance, X. Jiang, V. Gripon and C. Berrou, "Assembly Output Codes for Learning Neural Networks," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 285--289, September 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 226--230, September 2016. Manuscript.

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Distributed Coding and Synaptic Pruning," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 206--210, September 2016. Manuscript.

D. Ferro, V. Gripon and X. Jiang, "Nearest Neighbour Search Using Binary Neural Networks," in Proceedings of IJCNN, pp. 5106--5112, July 2016. Manuscript.

2015

E. Coyac, V. Gripon and C. Langlais, "Impact du bruit synaptique sur les performances des réseaux de cliques neurales," in Proceedings of the GRETSI conference, 2015. Manuscript.

R. Danilo, V. Gripon, P. Coussy and L. Conde-Canencia, "Réseaux de Clusters de Neurones Restreints," in Proceedings of the GRETSI conference, 2015. Manuscript.

B. Pasdeloup, V. Gripon, G. Mercier and D. Pastor, "Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux," in Proceedings of the GRETSI conference, 2015. Manuscript.

A. Mheich, M. Hassan, F. Wendling, M. Khalil, O. Dufor, V. Gripon and C. Berrou, "SimNet: A new algorithm for measuring brain networks similarity," in Proceedings of the ICABME international conference, pp. 119--122, 2015. Manuscript.

B. Pasdeloup, M. Rabbat, V. Gripon, D. Pastor and G. Mercier, "Graph Reconstruction from the Observation of Diffused Signals," in Proceedings of the 53rd Allerton Conference, pp. 1386--1390, October 2015. Manuscript.

R. Danilo, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Restricted Clustered Neural Network for Storing Real Data," in Proceedings of the GLSVLSI conference, pp. 205--210, May 2015. Manuscript.

R. Danilo, H. Jarollahi, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks," in Proceedings of ISCAS Conference, pp. 2501--2504, May 2015. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Model of Bottom-Up Visual Attention Using Cortical Magnification," in Proceedings of ICASSP, pp. 1493--1497, April 2015. Manuscript.

A. Mheich, M. Hassan, V. Gripon, O. Dufor, M. Khalil, C. Berrou and F. Wendling, "A novel algorithm for measuring graph similarity: application to brain networks," in Proceedings of the IEEE EMBS Neural Engineering Conference, pp. 1068--1071, April 2015. Manuscript.

A. Aboudib, V. Gripon and B. Tessiau, "Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques," in Proceedings of Cognitive, pp. 36--41, March 2015. Manuscript.

C. Yu, V. Gripon, X. Jiang and H. Jégou, "Neural Associative Memories as Accelerators for Binary Vector Search," in Proceedings of Cognitive, pp. 85--89, March 2015. Manuscript.

S. Larroque, E. S. Gooya, V. Gripon and D. Pastor, "Using Tags to Improve Diversity of Sparse Associative Memories," in Proceedings of Cognitive, pp. 1--7, March 2015. Manuscript.

E. S. Gooya, D. Pastor and V. Gripon, "Automatic face recognition using SIFT and networks of tagged neural cliques," in Proceedings of Cognitive, pp. 57--61, March 2015. Manuscript.

2014

V. Gripon, V. Skachek and M. Rabbat, "Sparse Binary Matrices as Efficient Associative Memories," in Proceedings of the 52nd Allerton conference, pp. 499--504, October 2014. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon, T. Hanyu and W. J. Gross, "Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories," in Proceedings of SiPS, pp. 1--6, October 2014. Manuscript.

C. Berrou, O. Dufor, V. Gripon and X. Jiang, "Information, Noise, Coding, Modulation: What about the Brain?," in Proceedings of the 8th symposium on Turbo Codes and Iterative Information Processing, pp. 167--172, August 2014. Manuscript.

Z. Yao, V. Gripon and M. Rabbat, "A GPU-based Associative Memory using Sparse Neural Networks," in Proceedings of the PCNN-14 conference, pp. 688--692, July 2014. Manuscript.

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques," in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, volume 1, pp. 262--268, July 2014. Manuscript.

A. Aboudib, V. Gripon and X. Jiang, "A study of retrieval algorithms of sparse messages in networks of neural cliques," in Proceedings of Cognitive 2014, pp. 140--146, May 2014. Manuscript.

M. Rabbat and V. Gripon, "Towards a Spectral Characterization of Signals Supported on Small-World Networks," in ICASSP, pp. 4793--4797, May 2014. Manuscript.

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. Gross, "Cluster-based Associative Memories Built From Unreliable Storage," in ICASSP, pp. 8370--8374, May 2014. Manuscript.

2013

V. Gripon, V. Skachek and M. G. Rabbat, "Sparse Structured Associative Memories as Efficient Set-Membership Data Structures," in Proceedings of the 51st Allerton conference, pp. 500--505, October 2013. Manuscript.

V. Gripon and X. Jiang, "Mémoires associatives pour observations floues," in Proceedings of XXIV-th Gretsi seminar, September 2013. Manuscript.

V. Gripon and M. Rabbat, "Maximum Likelihood Associative Memories," in Proceedings of Information Theory Workshop, pp. 1--5, September 2013. Manuscript.

V. Gripon and M. Rabbat, "Reconstructing a Graph from Path Traces," in Proceedings of International Symposium on Information Theory, pp. 2488-2492, July 2013. Manuscript.

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "A Low-Power Content-Addressable-Memory Based on Clustered-Sparse-Networks," in Proceedings of the 24th International Conference on Application-specific Systems, Architectures and Processors, pp. 642--653, June 2013. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Reduced-complexity binary-weight-coded associative memories," in Proceedings of International Conference on Acoustics, Speech, and Signal Processing, pp. 2523--2527, May 2013. Manuscript.

2012

V. Gripon, M. Rabbat, V. Skachek and W. J. Gross, "Compressing multisets using tries," in Proceedings of Information Theory Workshop, Lausanne, Switzerland, pp. 647--651, September 2012. Manuscript. Presentation.

V. Gripon, V. Skachek, W. J. Gross and M. Rabbat, "Random clique codes," in Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, pp. 121--125, August 2012. Manuscript. Presentation.

X. Jiang, V. Gripon and C. Berrou, "Learning long sequences in binary neural networks," in Proceedings of Cognitive 2012, Nice, France, pp. 165--170, July 2012. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks," in Proceedings of IEEE International Symposium on Circuits and Systems, pp. 2901--2904, May 2012. Manuscript. Presentation.

V. Gripon and C. Berrou, "Nearly-optimal associative memories based on distributed constant weight codes," in Proceedings of Information Theory and Applications Workshop, San Diego, CA, USA, pp. 269--273, February 2012. Manuscript. Presentation.

2011

V. Gripon and C. Berrou, "A simple and efficient way to store many messages using neural cliques," in Proceedings of IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain, Paris, France, pp. 54--58, April 2011. Manuscript. Presentation.

2010

C. Berrou and V. Gripon, "Coded Hopfield networks," in Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, pp. 1--5, September 2010. Manuscript.

2009

V. Gripon and O. Serre, "Qualitative Concurrent Stochastic Games with Imperfect Information," in Proceedings of 36th International Colloquium of Automata, Languages and Programming, Springer, Lecture Notes in Computer Science, Rhodes, Greece, pp. 200--211, July 2009. Manuscript.

Tropical Graph Signal Processing

V. Gripon, "Tropical Graph Signal Processing," in Proceedings of the Asilomar conference, October 2017. To appear.

For the past few years, the domain of graph signal processing has extended classical Fourier analysis to domains described by graphs. Most of the results were obtained by analogy with the study of heat propagation. We propose to perform a similar analysis in the context of tropical algebra, widely used in theoretical computer science to monitor propagation processes over graphs of distances. We introduce a Tropical Graph Fourier Transform and prove a few results on graph inference and the existence of a tropical uncertainty principle.
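
For readers unfamiliar with the tropical setting, a standard definition (textbook notation, not taken from the manuscript) is the (min, +) semiring, in which

$$a \oplus b = \min(a, b), \qquad a \otimes b = a + b,$$

so that a tropical matrix-vector product $(A \otimes x)_i = \min_j \left( A_{ij} + x_j \right)$ performs one step of shortest-path propagation over a graph of distances, the propagation process mentioned in the abstract.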

Download manuscript.

Bibtex
@inproceedings{Gri201710,
  author = {Vincent Gripon},
  title = {Tropical Graph Signal Processing},
  booktitle = {Proceedings of the Asilomar conference},
  year = {2017},
  month = {October},
  note = {To appear},
}

An Intrinsic Difference Between Vanilla RNNs and GRU Models

T. Stérin, N. Farrugia and V. Gripon, "An Intrinsic Difference Between Vanilla RNNs and GRU Models," in Proceedings of Cognitive, pp. 76--81, February 2017.

In order to perform well in practice, Recurrent Neural Networks (RNN) require computationally heavy architectures, such as Gated Recurrent Unit (GRU) or Long Short Term Memory (LSTM). Indeed, the original Vanilla model fails to encapsulate medium- and long-term sequential dependencies. The aim of this paper is to show that gradient training issues, which have motivated the introduction of LSTM and GRU models, are not sufficient to explain the failure of the simplest RNN. Using the example of Reber’s grammar, we propose an experimental measure of both Vanilla and GRU models, which suggests an intrinsic difference in their dynamics. A better mathematical understanding of this difference could lead to more efficient models without compromising performance.
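
As background, the two update rules being compared are the standard ones (textbook definitions, not equations taken from the paper). The Vanilla RNN updates its hidden state as

$$h_t = \tanh(W x_t + U h_{t-1} + b),$$

while the GRU gates the update:

$$z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad r_t = \sigma(W_r x_t + U_r h_{t-1}),$$
$$\tilde{h}_t = \tanh(W x_t + U (r_t \odot h_{t-1})), \qquad h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t.$$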

Download manuscript.

Bibtex
@inproceedings{StFarGri201702,
  author = {Tristan Stérin and Nicolas Farrugia and Vincent Gripon},
  title = {An Intrinsic Difference Between Vanilla RNNs and GRU Models},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {76--81},
  month = {February},
}

Finding All Matches in a Database using Binary Neural Networks

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Finding All Matches in a Database using Binary Neural Networks," in Proceedings of Cognitive, pp. 59--64, February 2017.

The most efficient architectures of associative memories are based on binary neural networks. For example, Sparse Clustered Networks (SCNs) achieve almost optimal memory efficiency while providing robust indexation of pieces of information through cliques in a neural network. In the canonical formulation of the associative memory problem, the unique stored message matching a given input probe is to be retrieved. In this paper, we focus on the more general problem of finding all messages matching a given probe. We consider real datasets in which many different messages can match a given probe, a situation that does not arise with uniformly distributed messages, as those are unlikely to share large common parts with one another. Namely, we implement a crossword dictionary containing 8-letter English words, and a chess endgame dataset, using associative memories based on binary neural networks. We explain how to adapt the SCN architecture to these challenging datasets and introduce a backtracking procedure to retrieve all completions of a given input. We evaluate the performance of the proposed method using different measures and discuss the importance of parameters.
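
To make the backtracking procedure concrete, here is a minimal sketch, not the authors' implementation: fixed-length words are stored as cliques across one cluster per letter position, and all completions of a probe with '?' wildcards are enumerated. The encoding and alphabet are assumptions for illustration.

from collections import defaultdict

L = 8                                   # one cluster per letter position
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def store(words):
    # edges[(i, a)][j] = letters seen at position j together with a at i
    edges = defaultdict(lambda: defaultdict(set))
    for w in words:
        for i in range(L):
            for j in range(L):
                if i != j:
                    edges[(i, w[i])][j].add(w[j])
    return edges

def complete(edges, probe):
    # yields every clique compatible with the probe; clique overlaps may
    # also produce spurious matches, which the paper's method must handle
    def backtrack(i, prefix):
        if i == L:
            yield "".join(prefix)
            return
        for c in (probe[i] if probe[i] != "?" else ALPHABET):
            if all(c in edges[(j, prefix[j])][i] for j in range(i)):
                yield from backtrack(i + 1, prefix + [c])
    yield from backtrack(0, [])

edges = store(["absolute", "abstract"])
print(list(complete(edges, "abs?????")))   # both stored words match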

Download manuscript.

Bibtex
@inproceedings{HacGriFarArzJez201702,
  author = {Ghouthi Boukli Hacene and Vincent Gripon and Nicolas Farrugia and Matthieu Arzel and Michel Jezequel},
  title = {Finding All Matches in a Database using Binary Neural Networks},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {59--64},
  month = {February},
}

Performance of Neural Clique Networks Subject to Synaptic Noise

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Performance of Neural Clique Networks Subject to Synaptic Noise," in Proceedings of Cognitive, pp. 4--9, February 2017.

Artificial neural networks are so called because they are supposed to be inspired by the brain and by the way neurons work. While some networks are used purely for computational purposes and do not endeavor to be a plausible representation of what happens in the brain, such as deep learning neural networks, others do. However, the question of noise in the brain and its impact on the functioning of those networks has been little studied. For example, it is widely known that synapses misfire with a significant probability. We model this noise and study its impact on associative memories powered by neural networks: neural clique networks, with Hopfield networks as a reference point. We show that synaptic noise can in fact slightly improve the performance of the decoding process of neural clique networks by avoiding local minima.

Download manuscript.

Bibtex
@inproceedings{CoyGriLanBer201702,
  author = {Eliott Coyac and Vincent Gripon and Charlotte Langlais and Claude Berrou},
  title = {Performance of Neural Clique Networks Subject to Synaptic Noise},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {4--9},
  month = {February},
}

Neighborhood-Preserving Translations on Graphs

N. Grelier, B. Pasdeloup, J. Vialatte and V. Gripon, "Neighborhood-Preserving Translations on Graphs," in Proceedings of GlobalSIP, pp. 410--414, October 2016.

In many domains (e.g. Internet of Things, neuroimaging) signals are naturally supported on graphs. These graphs usually convey information on the similarity between the values taken by the signal at the corresponding vertices. One interest of using graphs is that they allow ad hoc operators to be defined to perform signal processing. Among them, translations are of paramount importance in many tasks. In this paper we propose new definitions of translations on graphs using a few simple properties. Namely, we propose to define translations as functions from vertices to adjacent ones that preserve neighborhood properties of the graph. We show that our definitions, contrary to other works on the subject, match usual translations on grid graphs.
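
As a toy illustration of the flavor of such definitions (a simplified check, not the paper's formal one), the following verifies that a candidate translation moves every vertex to a neighbor and maps edges to edges; on a cycle graph, rotation by one vertex passes the test.

def is_neighborhood_preserving(adj, phi):
    # adj: dict vertex -> set of neighbors; phi: dict vertex -> vertex
    for u in adj:
        if phi[u] not in adj[u]:            # each vertex moves along an edge
            return False
    for u in adj:
        for v in adj[u]:
            if phi[v] not in adj[phi[u]]:   # neighbors remain neighbors
                return False
    return True

n = 6
cycle = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
rotation = {i: (i + 1) % n for i in range(n)}
print(is_neighborhood_preserving(cycle, rotation))  # True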


Bibtex
@inproceedings{GrePasViaGri201610,
  author = {Nicolas Grelier and Bastien Pasdeloup and Jean-Charles Vialatte and Vincent Gripon},
  title = {Neighborhood-Preserving Translations on Graphs},
  booktitle = {Proceedings of GlobalSIP},
  year = {2016},
  pages = {410--414},
  month = {October},
}

Assembly Output Codes for Learning Neural Networks

P. Tigréat, C. R. K. Lassance, X. Jiang, V. Gripon and C. Berrou, "Assembly Output Codes for Learning Neural Networks," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 285--289, September 2016.

Neural network-based classifiers usually encode the class labels of input data via a completely disjoint code, i.e. a binary vector with only one bit associated with each category. We use coding theory to propose assembly codes where each element is associated with several classes, making for better target vectors. These codes emulate the combination of several classifiers, which is a well-known method to improve decision accuracy. Our experiments on datasets such as MNIST with a multi-layer neural network show that assembly output codes, which are characterized by a higher minimum Hamming distance, result in better classification performance. These codes are also well suited to the use of clustered clique-based networks in category representation.
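
A minimal sketch of the idea, with hypothetical parameters (the paper's code construction is more principled than the random supports used here): each class is assigned several active output units instead of one, and decoding picks the nearest codeword in Hamming distance.

import numpy as np

rng = np.random.default_rng(0)
n_classes, n_out, c = 4, 8, 3           # assumed sizes; c active units/class

one_hot = np.eye(n_classes, dtype=int)  # minimum Hamming distance 2
assembly = np.zeros((n_classes, n_out), dtype=int)
for k in range(n_classes):
    assembly[k, rng.choice(n_out, size=c, replace=False)] = 1

def decode(codebook, y):
    # nearest-codeword decoding in Hamming distance
    return int(np.argmin(np.abs(codebook - y).sum(axis=1)))

noisy = assembly[2].copy()
noisy[0] ^= 1                           # flip one output bit
print(decode(assembly, noisy))          # 2, provided codewords are far apart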

Download manuscript.

Bibtex
@inproceedings{TigLasJiaGriBer20169,
  author = {Philippe Tigréat and Carlos Rosar Kos Lassance and Xiaoran Jiang and Vincent Gripon and Claude Berrou},
  title = {Assembly Output Codes for Learning Neural Networks},
  booktitle = {Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing},
  year = {2016},
  pages = {285--289},
  month = {September},
}

A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features

A. Aboudib, V. Gripon and G. Coppin, "A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 226--230, September 2016.

Establishing correspondences between image features is a fundamental problem in many computer vision tasks. It is traditionally viewed as a graph matching problem, and solved using an optimization procedure. In this paper, we propose a new approach to solving the correspondence problem from a coding/decoding perspective. We then present an iterative matching algorithm inspired by the turbo-decoding concept. We provide an experimental evaluation of the proposed method, and show that it performs better than state-of-the-art algorithms in the presence of clutter, thanks to turbo-style decoding.

Download manuscript.

Bibtex
@inproceedings{AboGriCop20169,
  author = {Ala Aboudib and Vincent Gripon and Gilles Coppin},
  title = {A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features},
  booktitle = {Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing},
  year = {2016},
  pages = {226--230},
  month = {September},
}

Distributed Coding and Synaptic Pruning

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Distributed Coding and Synaptic Pruning," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 206--210, September 2016.

This paper deals with the modeling of synaptic pruning in the developing brain, at the informational level. Relying on the hypotheses of clique-based memory and Hebbian learning, we consider several scenarios to try to understand how reliable point-to-point communication may be achieved in the cortex, whereas at birth neurons are connected to a multitude of other neurons, similar to a broadcast-type connection. It is shown that quasi-perfect transfer of information can be obtained in a plausible way using simple rules if a mechanism of synaptic normalization is implemented at the receiver side.

Download manuscript.

Bibtex
@inproceedings{CoyGriLanBer20169,
  author = {Eliott Coyac and Vincent Gripon and Charlotte Langlais and Claude Berrou},
  title = {Distributed Coding and Synaptic Pruning},
  booktitle = {Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing},
  year = {2016},
  pages = {206--210},
  month = {September},
}

Nearest Neighbour Search Using Binary Neural Networks

D. Ferro, V. Gripon and X. Jiang, "Nearest Neighbour Search Using Binary Neural Networks," in Proceedings of IJCNN, pp. 5106--5112, July 2016.

The problem of finding nearest neighbours in terms of Euclidean distance, Hamming distance or another distance metric is a very common operation in computer vision and pattern recognition. In order to accelerate the search for the nearest neighbour in large datasets, many methods rely on the coarse-fine approach. In this paper we propose to combine Product Quantization (PQ) and binary neural associative memories to perform the coarse search. Our motivation lies in the fact that the dimension of the neural-network representation associated with a set of k vectors is independent of k. We run experiments on the TEXMEX SIFT1M and MNIST databases and observe significant improvements in terms of search complexity compared to raw PQ.
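
The coarse-fine principle the abstract builds on can be sketched as follows (an illustrative stand-in: a random binary signature plays the role of the coarse code, whereas the paper uses PQ plus an associative memory):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 64))      # database vectors
q = rng.standard_normal(64)               # query
P = rng.standard_normal((64, 32))         # random projection

sig = X @ P > 0                           # cheap binary signatures
qsig = q @ P > 0
shortlist = np.argsort((sig != qsig).sum(axis=1))[:100]   # coarse search
best = shortlist[np.argmin(((X[shortlist] - q) ** 2).sum(axis=1))]  # fine
print(best)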

Download manuscript.

Bibtex
@inproceedings{FerGriJia201607,
  author = {Demetrio Ferro and Vincent Gripon and Xiaoran Jiang},
  title = {Nearest Neighbour Search Using Binary Neural Networks},
  booktitle = {Proceedings of IJCNN},
  year = {2016},
  pages = {5106--5112},
  month = {July},
}

Impact du bruit synaptique sur les performances des réseaux de cliques neurales

E. Coyac, V. Gripon and C. Langlais, "Impact du bruit synaptique sur les performances des réseaux de cliques neurales," in Proceedings of the GRETSI conference, 2015.

Artificial neural networks are inspired by biological neural networks present in the brain, and biological plausibility is often used as an argument to validate or criticize a neural network proposal. However, the brain is a system subject to a lot of interference, and the behaviour of neural networks with respect to this noise has rarely been studied. This paper introduces a model to represent noise inside the brain, and studies how neural clique networks respond to that noise. It is shown that the noise can improve neural clique network performance by avoiding local minima. We also show the impact of this noise on the widely-known Hopfield networks.

Download manuscript.

Bibtex
@inproceedings{CoyGriLan2015,
  author = {Eliott Coyac and Vincent Gripon and Charlotte Langlais},
  title = {Impact du bruit synaptique sur les performances des réseaux de cliques neurales},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

Réseaux de Clusters de Neurones Restreints

R. Danilo, V. Gripon, P. Coussy and L. Conde-Canencia, "Réseaux de Clusters de Neurones Restreints," in Proceedings of the GRETSI conference, 2015.

Associative memories are capable of retrieving a message previously stored when an incomplete version of this message is presented. A model of associative memory based on binary neurons and binary connections, named Clustered Neural Network, has recently been introduced. The performance of this model drops when the stored message distribution is non-uniform. The goal of this paper is to propose a new model of associative memory inspired by Clustered Neural Networks and Restricted Boltzmann Machines, in order to decrease the vulnerability to non-uniform distributions. In addition, we present a fully parallel hardware design of the model. The proposed implementation multiplies the number of stored messages by a factor of 3 with a 40% increase in complexity.

Download manuscript.

Bibtex
@inproceedings{DanGriCouCon2015,
  author = {Robin Danilo and Vincent Gripon and Philippe Coussy and Laura Conde-Canencia},
  title = {Réseaux de Clusters de Neurones Restreints},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux

B. Pasdeloup, V. Gripon, G. Mercier and D. Pastor, "Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux," in Proceedings of the GRETSI conference, 2015.

Signal processing on graphs is a recent research domain that aims at generalizing classical tools of signal processing in order to analyze signals evolving on complex domains. Such domains are represented by graphs, for which one can compute a particular matrix, called the normalized Laplacian [3]. It was shown that the eigenvalues of this Laplacian correspond to the frequencies of the Fourier domain in classical signal processing [2]. Therefore, the frequency domain is not the same for every support graph. A consequence of this is that there is no non-trivial generalization of Heisenberg’s uncertainty principle, which states that a signal cannot be localized both in the time domain and in the frequency domain. A way to generalize this principle, introduced by Agaskar & Lu in [1], consists of determining a curve that represents a lower bound on the trade-off between precision in the graph domain and precision in the spectral domain. The aim of this paper is to propose a characterization of the signals achieving this curve, for a larger class of graphs than the one studied by Agaskar & Lu.
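
For reference, the two spreads traded off along the Agaskar-Lu curve are (recalled here in their usual form; normalization conventions may differ from [1]):

$$\Delta_g^2(x) = \frac{1}{\|x\|_2^2} \sum_{v} d(u_0, v)^2 \, x(v)^2, \qquad \Delta_s^2(x) = \frac{x^\top \mathcal{L} x}{\|x\|_2^2},$$

where $u_0$ is a reference vertex, $d$ the geodesic distance and $\mathcal{L}$ the normalized Laplacian; the uncertainty curve is the lower boundary of the achievable pairs $(\Delta_g^2(x), \Delta_s^2(x))$.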

Download manuscript.

Bibtex
@inproceedings{PasGriMerPas2015,
  author = {Bastien Pasdeloup and Vincent Gripon and Grégoire Mercier and Dominique Pastor},
  title = {Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

SimNet: A new algorithm for measuring brain networks similarity

A. Mheich, M. Hassan, F. Wendling, M. Khalil, O. Dufor, V. Gripon and C. Berrou, "SimNet: A new algorithm for measuring brain networks similarity," in Proceedings of the ICABME international conference, pp. 119--122, 2015.

Measuring similarity among graphs is recognized as a non-trivial problem. Most of the algorithms proposed so far ignore the spatial location of vertices, which is a crucial factor in the context of brain networks. In this paper, we present a novel algorithm, called “SimNet”, for measuring the similarity between two graphs whose vertices represent the position of sources over the cortex. The novelty is to account for differences at the level of spatially-registered vertices and edges. Simulated graphs are used to evaluate the algorithm performance and to compare it with methods reported elsewhere. Results show that SimNet is able to quantify the similarity between two graphs under a spatial constraint based on the 3D location of edges. The application of SimNet to real data (dense EEG) reveals the presence of spatially-different brain network modules activating during cognitive activity.

Download manuscript.

Bibtex
@inproceedings{MheHasWenKhaDufGriBer2015,
  author = {A. Mheich and M. Hassan and F. Wendling and M. Khalil and O. Dufor and V. Gripon and C. Berrou},
  title = {SimNet: A new algorithm for measuring brain networks similarity},
  booktitle = {Proceedings of the ICABME international conference},
  year = {2015},
  pages = {119--122},
}

Graph Reconstruction from the Observation of Diffused Signals

B. Pasdeloup, M. Rabbat, V. Gripon, D. Pastor and G. Mercier, "Graph Reconstruction from the Observation of Diffused Signals," in Proceedings of the 53rd Allerton Conference, pp. 1386--1390, October 2015.

Signal processing on graphs has received a lot of attention in recent years. Many techniques have arisen, inspired by classical signal processing ones, to allow the study of signals on any kind of graph. A common aspect of these techniques is that they require a graph correctly modeling the studied support to explain the signals that are observed on it. However, in many cases, such a graph is unavailable or has no real physical existence. An example of this latter case is a set of sensors randomly scattered in a field, which clearly observe related information. To study such signals, there is no intuitive choice for a support graph. In this document, we address the problem of inferring a graph structure from the observation of signals, under the assumption that they result from the diffusion of initially i.i.d. signals. To validate our approach, we design an experimental protocol in which we diffuse signals on a known graph. Then, we forget the graph and show that we are able to retrieve it very precisely from knowledge of the diffused signals alone.
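
The experimental protocol can be mimicked in a few lines, under the simplifying assumption that the diffusion operator is a single known-form step I + eps*A applied to i.i.d. signals (the paper's setting and inference procedure are more general):

import numpy as np

rng = np.random.default_rng(0)
n, eps = 20, 0.1
A = np.triu(rng.random((n, n)) < 0.2, 1).astype(float)
A = A + A.T                                  # ground-truth adjacency

X = rng.standard_normal((n, 5000))           # i.i.d. signals
Y = (np.eye(n) + eps * A) @ X                # one diffusion step

C = np.cov(Y)                                # ~ (I + eps*A)^2 in this setting
w, V = np.linalg.eigh(C)
S = V @ np.diag(np.sqrt(np.maximum(w, 0))) @ V.T   # matrix square root
est = (S - np.diag(np.diag(S))) > eps / 2    # threshold off-diagonal entries
print((est == (A > 0)).mean())               # fraction of entries recovered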

Download manuscript.

Bibtex
@inproceedings{PasRabGriPasMer201510,
  author = {Bastien Pasdeloup and Michael Rabbat and Vincent Gripon and Dominique Pastor and Gregoire Mercier},
  title = {Graph Reconstruction from the Observation of Diffused Signals},
  booktitle = {Proceedings of the 53rd Allerton Conference},
  year = {2015},
  pages = {1386--1390},
  month = {October},
}

Restricted Clustered Neural Network for Storing Real Data

R. Danilo, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Restricted Clustered Neural Network for Storing Real Data," in Proceedings of the GLSVLSI conference, pp. 205--210, May 2015.

Associative memories are an alternative to classical indexed memories, capable of retrieving a message previously stored when an incomplete version of this message is presented. Recently a new model of associative memory based on binary neurons and binary links has been proposed. This model, named Clustered Neural Network (CNN), offers large storage diversity (number of messages stored) and fast message retrieval when implemented in hardware. The performance of this model drops when the stored message distribution is non-uniform. In this paper, we enhance the CNN model to support non-uniform message distributions by adding features of Restricted Boltzmann Machines. In addition, we present a fully parallel hardware design of the model. The proposed implementation multiplies the performance (diversity) of Clustered Neural Networks by a factor of 3 with a 40% increase in complexity.

Download manuscript.

Bibtex
@inproceedings{DanGriCouConGro20155,
  author = {Robin Danilo and Vincent Gripon and Philippe Coussy and Laura Conde-Canencia and Warren J. Gross},
  title = {Restricted Clustered Neural Network for Storing Real Data},
  booktitle = {Proceedings of the GLSVLSI conference},
  year = {2015},
  pages = {205--210},
  month = {May},
}

Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks

R. Danilo, H. Jarollahi, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks," in Proceedings of ISCAS Conference, pp. 2501--2504, May 2015.

Associative memories are capable of retrieving previously stored patterns given parts of them. This feature makes them good candidates for pattern detection in images. Clustered Neural Networks are a recently-introduced family of associative memories that allow fast pattern retrieval when implemented in hardware. In this paper, we propose a new pattern retrieval algorithm that results in a dramatically lower error rate compared to that of the conventional approach when used in an oriented edge detection process. This function plays an important role in image processing. Furthermore, we present the corresponding hardware architecture and implementation of the new approach in comparison with a conventional architecture from the literature, and show that the proposed architecture does not significantly affect hardware complexity.

Download manuscript.

Bibtex
@inproceedings{DanJarGriCouConGro20155,
  author = {Robin Danilo and Homman Jarollahi and Vincent Gripon and Philippe Coussy and Laura Conde-Canencia and Warren J. Gross},
  title = {Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks},
  booktitle = {Proceedings of ISCAS Conference},
  year = {2015},
  pages = {2501--2504},
  month = {May},
}

A Model of Bottom-Up Visual Attention Using Cortical Magnification

A. Aboudib, V. Gripon and G. Coppin, "A Model of Bottom-Up Visual Attention Using Cortical Magnification," in Proceedings of ICASSP, pp. 1493--1497, April 2015.

The focus of visual attention has been argued to play a key role in object recognition. Many computational models of visual attention have been proposed to estimate locations of eye fixations driven by bottom-up stimuli. Most of these models rely on pyramids consisting of multiple scaled versions of the visual scene. This design aims at capturing the fact that neural cells in higher visual areas tend to have larger receptive fields (RFs). On the other hand, very few models represent the multi-scaling resulting from the eccentricity-dependent RF sizes within each visual layer, also known as the cortical magnification effect. In this paper, we demonstrate that using a cortical-magnification-like mechanism can lead to competitive alternatives to pyramidal approaches in the context of attentional modeling. Moreover, we argue that introducing such a mechanism equips the proposed model with additional properties related to overt attention and distance-dependent saliency that are worth exploring.

Download manuscript.

Bibtex
@inproceedings{AboGriCop20154,
  author = {Ala Aboudib and Vincent Gripon and Gilles Coppin},
  title = {A Model of Bottom-Up Visual Attention Using Cortical Magnification},
  booktitle = {Proceedings of ICASSP},
  year = {2015},
  pages = {1493--1497},
  month = {April},
}

A novel algorithm for measuring graph similarity: application to brain networks

A. Mheich, M. Hassan, V. Gripon, O. Dufor, M. Khalil, C. Berrou and F. Wendling, "A novel algorithm for measuring graph similarity: application to brain networks," in Proceedings of the IEEE EMBS Neural Engineering Conference, pp. 1068--1071, April 2015.

Measuring the similarity among graphs is a challenging issue in many disciplines including neuroscience. Several algorithms, mainly based on vertices or edges properties, were proposed to address this issue. Most of them ignore the physical location of the vertices, which is a crucial factor in the analysis of brain networks. Indeed, functional brain networks are usually represented as graphs composed of vertices (brain regions) connected by edges (functional connectivity). In this paper, we propose a novel algorithm to measure a similarity between graphs. The novelty of our approach is to account for vertices, edges and spatiality at the same time. The proposed algorithm is evaluated using synthetic graphs. It shows high ability to detect and measure similarity between graphs. An application to real functional brain networks is then described. The algorithm allows for quantification of the inter-subjects variability during a picture naming task.

Download manuscript.

Bibtex
@inproceedings{MheHasGriDufKhaBerWen20154,
  author = {A. Mheich and M. Hassan and V. Gripon and O. Dufor and M. Khalil and C. Berrou and F. Wendling},
  title = {A novel algorithm for measuring graph similarity: application to brain networks},
  booktitle = {Proceedings of the IEEE EMBS Neural Engineering Conference},
  year = {2015},
  pages = {1068--1071},
  month = {April},
}

Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques

A. Aboudib, V. Gripon and B. Tessiau, "Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques," in Proceedings of Cognitive, pp. 36--41, March 2015.

Associative memories are devices capable of retrieving previously stored messages from parts of their content. They are used in a variety of applications including CPU caches, routers and intrusion detection systems. They are also considered a good model for human memory, motivating the use of neural-based techniques. When it comes to cognition, it is important to provide such devices with the ability to perform complex requests, such as union, intersection, difference, projection and selection. In this paper, we extend a recently introduced associative memory model to perform relational algebra operations. We introduce new algorithms and discuss their performance, which provides insight into how the brain performs some high-level information processing tasks.

Download manuscript.

Bibtex
@inproceedings{AboGriTes20153,
  author = {Ala Aboudib and Vincent Gripon and Baptiste Tessiau},
  title = {Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {36--41},
  month = {March},
}

Neural Associative Memories as Accelerators for Binary Vector Search

C. Yu, V. Gripon, X. Jiang and H. Jégou, "Neural Associative Memories as Accelerators for Binary Vector Search," in Proceedings of Cognitive, pp. 85--89, March 2015.

Associative memories aim at matching an input noisy vector with a stored one. The matched vector satisfies a minimum distance criterion with respect to the inner metric of the device. This problem of finding nearest neighbors in terms of Euclidean or Hamming distance is a very common operation in machine learning and pattern recognition. However, the inner metrics of associative memories are often ill-suited to practical scenarios. In this paper, we adapt Willshaw networks in order to use them for accelerating nearest neighbor search with limited impact on accuracy. We provide a theoretical analysis of our method for binary sparse vectors. We also test our method using the MNIST handwritten digits database. Both our analysis on synthetic data and our experiments on real data show a significant gain in complexity with negligible loss in performance compared to exhaustive search.
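
A minimal sketch of the underlying Willshaw memory (the standard model; the paper's adaptation to search acceleration adds more machinery): sparse binary patterns are superimposed into a binary matrix, and a probe with one erased unit still scores the stored pattern maximally.

import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 8, 200                  # units, active units, patterns
patterns = np.zeros((m, n), dtype=bool)
for i in range(m):
    patterns[i, rng.choice(n, k, replace=False)] = True

W = np.zeros((n, n), dtype=bool)
for p in patterns:                     # Hebbian superposition
    W |= np.outer(p, p)

probe = patterns[17].copy()
probe[np.flatnonzero(probe)[0]] = False            # erase one active unit
scores = W[:, probe].sum(axis=1)                   # support from the probe
retrieved = scores >= probe.sum()                  # classic threshold rule
print(set(np.flatnonzero(patterns[17])) <= set(np.flatnonzero(retrieved)))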

Download manuscript.

Bibtex
@inproceedings{YuGriJiaJé20153,
  author = {Chendi Yu and Vincent Gripon and Xiaoran Jiang and Hervé Jégou},
  title = {Neural Associative Memories as Accelerators for Binary Vector Search},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {85--89},
  month = {March},
}

Using Tags to Improve Diversity of Sparse Associative Memories

S. Larroque, E. S. Gooya, V. Gripon and D. Pastor, "Using Tags to Improve Diversity of Sparse Associative Memories," in Proceedings of Cognitive, pp. 1--7, March 2015.

Associative memories, a classical model for brain long-term memory, face interferences between old and new memories. Usually, the only remedy is to enlarge the network so as to retain more memories without collisions: this is the network’s size-diversity trade-off. We propose a novel way of representing data in these networks that provides another means to extend diversity without resizing the network. We show from our analysis and simulations that this method is a viable alternative, which perfectly fits cases where the network’s size is constrained, such as neuromorphic FPGA boards implementing associative memories.

Download manuscript.

Bibtex
@inproceedings{LarGooGriPas20153,
  author = {Stephen Larroque and Ehsan Sedgh Gooya and Vincent Gripon and Dominique Pastor},
  title = {Using Tags to Improve Diversity of Sparse Associative Memories},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {1--7},
  month = {March},
}

Automatic face recognition using SIFT and networks of tagged neural cliques

E. S. Gooya, D. Pastor and V. Gripon, "Automatic face recognition using SIFT and networks of tagged neural cliques," in Proceedings of Cognitive, pp. 57--61, March 2015.

Carrying information in fully interconnected subgraphs (cliques) has recently been improved in networks of neural cliques. In this paper, a face recognition system is presented using such networks, where local descriptors are used to perform feature extraction. In the wide range of possible image descriptors for face recognition, we focus specifically on the Scale Invariant Feature Transform (SIFT). In contrast to standard methods, our proposed method requires no empirically chosen threshold. Moreover, it performs matching between sets of features, in addition to individual feature matching. Thus, we favor joint occurrences of descriptors during the recognition process. We compare our approach to state-of-the-art face recognition systems based on SIFT descriptors. The evaluation is carried out on the Olivetti and Oracle Research Laboratory (ORL) face database, whose diversity is significant for assessing face recognition methods.

Best paper award

Download manuscript.

Bibtex
@inproceedings{GooPasGri20153,
  author = {Ehsan Sedgh Gooya and Dominique Pastor and Vincent Gripon},
  title = {Automatic face recognition using SIFT and networks of tagged neural cliques},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {57--61},
  month = {March},
}

Sparse Binary Matrices as Efficient Associative Memories

V. Gripon, V. Skachek and M. Rabbat, "Sparse Binary Matrices as Efficient Associative Memories," in Proceedings of the 52nd Allerton conference, pp. 499--504, October 2014.

Associative memories are widely used devices which can be viewed as universal error-correcting decoders. Employing error-correcting code principles in these devices has greatly enhanced their performance. In this paper we reintroduce a neural-based model using the formalism of linear algebra and extend its functionality, originally limited to erasure retrieval, to handle approximate inputs. In order to perform the retrieval, we use an iterative algorithm that provably converges. We then analyze the performance of the associative memory under the assumption of connection independence. We support our theoretical results with numerical simulations.
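
The general shape of such an iterative erasure-retrieval loop can be sketched as follows (illustrative only; the paper's exact update rule, thresholds and convergence proof are in the manuscript):

import numpy as np

rng = np.random.default_rng(1)
n, k = 128, 6
msg = np.zeros(n, dtype=bool)
msg[rng.choice(n, k, replace=False)] = True
W = np.outer(msg, msg)                     # sparse binary storage (1 message)

def retrieve(W, x, iters=20):
    # x: boolean probe with erased entries; iterate until a fixed point
    for _ in range(iters):
        support = W.astype(int) @ x.astype(int)   # backing for each unit
        x_new = support >= x.sum()                # keep fully supported units
        if np.array_equal(x_new, x):              # fixed point reached: stop
            break
        x = x_new
    return x

probe = msg.copy()
probe[np.flatnonzero(probe)[:2]] = False   # erase two active units
print(np.array_equal(retrieve(W, probe), msg))   # True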

Download manuscript.

Bibtex
@inproceedings{GriSkaRab201410,
  author = {Vincent Gripon and Vitaly Skachek and Michael Rabbat},
  title = {Sparse Binary Matrices as Efficient Associative Memories},
  booktitle = {Proceedings of the 52nd Allerton conference},
  year = {2014},
  pages = {499--504},
  month = {October},
}

Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories

H. Jarollahi, N. Onizawa, V. Gripon, T. Hanyu and W. J. Gross, "Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories," in Proceedings of SiPS, pp. 1--6, October 2014.

In this paper, a context-driven search engine is presented based on a new family of associative memories. It stores only the associations between items from multiple search fields in the form of binary links, and merges repeated field items to reduce the memory requirements. It achieves a 13.6× reduction in memory bits and accesses, and an 8.6× reduction in the number of clock cycles per search operation, compared to a classical field-based search structure using content-addressable memory. Furthermore, using parallel computational nodes, the proposed search engine achieves a five-orders-of-magnitude reduction in the number of clock cycles compared to a CPU-based counterpart running a classical search algorithm in software.

Download manuscript.

Bibtex
@inproceedings{JarOniGriHanGro201410,
  author = {Hooman Jarollahi and Naoya Onizawa and Vincent Gripon and Takahiro Hanyu and Warren J. Gross},
  title = {Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories},
  booktitle = {Proceedings of SiPS},
  year = {2014},
  pages = {1--6},
  month = {October},
}

Information, Noise, Coding, Modulation: What about the Brain?

C. Berrou, O. Dufor, V. Gripon and X. Jiang, "Information, Noise, Coding, Modulation: What about the Brain?," in Proceedings of the 8th symposium on Turbo Codes and Iterative Information Processing, pp. 167--172, August 2014.

At the microscopic level, the brain is fundamentally a matter of physics and chemistry, as are all the components of the universe. At the macroscopic scale, behavior, psychology and affects are the main dimensions of its life. To convert atoms and molecules into intelligence, some kind of information has to be fixed in the grey matter of the cerebral cortex. The way this "mental information" is materialized and processed is still an enigma, probably the most puzzling problem addressed by science today. At this mesoscopic level of brain functioning, the concepts to consider are likely the same as those considered in communication and information theory: mainly information, noise, coding and modulation. This paper proposes some ideas that could help understand some features of the brain from an information-processing perspective.

Download manuscript.

Bibtex
@inproceedings{BerDufGriJia20148,
  author = {Claude Berrou and Olivier Dufor and Vincent Gripon and Xiaoran Jiang},
  title = {Information, Noise, Coding, Modulation: What about the Brain?},
  booktitle = {Proceedings of the 8th symposium on Turbo Codes and Iterative Information Processing},
  year = {2014},
  pages = {167--172},
  month = {August},
}

A GPU-based Associative Memory using Sparse Neural Networks

Z. Yao, V. Gripon and M. Rabbat, "A GPU-based Associative Memory using Sparse Neural Networks," in Proceedings of the PCNN-14 conference, pp. 688--692, July 2014.

Associative memories, serving as building blocks for a variety of algorithms, store content in such a way that it can later be retrieved by probing the memory with a small portion of it, rather than with an address as in more traditional memories. Recently, Gripon and Berrou have introduced a novel construction which builds on ideas from the theory of error correcting codes, greatly outperforming the celebrated Hopfield Neural Networks in terms of the number of stored messages per neuron and the number of stored bits per synapse. The work of Gripon and Berrou proposes two retrieval rules, SUM-OF-SUM and SUM-OF-MAX. In this paper, we implement both rules on a general purpose graphics processing unit (GPU). SUM-OF-SUM uses only matrix-vector multiplication and is easily implemented on the GPU, whereas SUM-OF-MAX, which involves non-linear operations, is much less straightforward to implement. However, SUM-OF-MAX gives significantly better retrieval error rates. We propose a hybrid scheme tailored for implementation on a GPU which achieves an 880-fold speedup without sacrificing any accuracy.
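
The two rules can be written compactly in numpy (a sketch of the scoring step only, with an assumed layout of c clusters of l units each; the full iterative decoding loop and the paper's hybrid GPU scheme are omitted):

import numpy as np

def sum_of_sum(W, x):
    # every unit sums all incoming activity: a plain matrix-vector product
    return W @ x

def sum_of_max(W, x, c, l):
    # each unit receives at most one contribution per cluster: the inner
    # sum over a cluster is replaced by a max within that cluster
    contrib = W.reshape(c * l, c, l) * x.reshape(1, c, l)
    return contrib.max(axis=2).sum(axis=1)

c, l = 4, 16
rng = np.random.default_rng(0)
W = (rng.random((c * l, c * l)) < 0.1).astype(int)
x = (rng.random(c * l) < 0.1).astype(int)
print(sum_of_sum(W, x)[:5], sum_of_max(W, x, c, l)[:5])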

Download manuscript.

Bibtex
@inproceedings{YaoGriRab20147,
  author = {Zhe Yao and Vincent Gripon and Michael Rabbat},
  title = {A GPU-based Associative Memory using Sparse Neural Networks},
  booktitle = {Proceedings of the PCNN-14 conference},
  year = {2014},
  pages = {688--692},
  month = {July},
}

Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques," in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, volume 1, pp. 262--268, July 2014.

A new family of sparse neural networks achieving nearly optimal performance has been recently introduced. In these networks, messages are stored as cliques in clustered graphs. In this paper, we interpret these networks using the formalism of error correcting codes. To achieve this, we introduce two original codes, the thrifty code and the clique code, that are both sub-families of binary constant weight codes. We also provide the networks with an enhanced retrieving rule that enables a property of answer correctness and that improves performance.
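
Since the paper pairs non-uniform message distributions with Huffman coding, a textbook Huffman construction is recalled below (the standard algorithm, not code from the paper): more probable symbols receive shorter codewords.

import heapq

def huffman(freqs):
    # freqs: dict symbol -> probability; returns dict symbol -> bitstring
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

print(huffman({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# {'a': '0', 'b': '10', 'c': '110', 'd': '111'}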

Download manuscript.

Bibtex
@inproceedings{BogGriSegHei20147,
  author = {Bartosz Boguslawski and Vincent Gripon and Fabrice Seguin and Frédéric Heitzmann},
  title = {Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques},
  booktitle = {Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, volume 1},
  year = {2014},
  pages = {262--268},
  month = {July},
}

A study of retrieval algorithms of sparse messages in networks of neural cliques

A. Aboudib, V. Gripon and X. Jiang, "A study of retrieval algorithms of sparse messages in networks of neural cliques," in Proceedings of Cognitive 2014, pp. 140--146, May 2014.

Associative memories are data structures addressed using part of the content rather than an index. They offer good resilience to faults and biological plausibility. Among different families of associative memories, sparse ones are known to offer the best efficiency (ratio of the number of bits stored to the number of bits used by the network itself). Their retrieval performance has been shown to benefit from the use of iterations. In this paper, we introduce several algorithms to enhance the performance of the retrieval process in recently proposed sparse associative memories based on binary neural networks. We show that these algorithms provide better performance than existing techniques and discuss their biological plausibility. We also analyze the required number of iterations and derive corresponding curves.

Download manuscript.

Bibtex
@inproceedings{AboGriJia20145,
  author = {Ala Aboudib and Vincent Gripon and Xiaoran Jiang},
  title = {A study of retrieval algorithms of sparse messages in networks of neural cliques},
  booktitle = {Proceedings of Cognitive 2014},
  year = {2014},
  pages = {140--146},
  month = {May},
}

Towards a Spectral Characterization of Signals Supported on Small-World Networks

M. Rabbat and V. Gripon, "Towards a Spectral Characterization of Signals Supported on Small-World Networks," in ICASSP, pp. 4793--4797, May 2014.

We study properties of the family of small-world random graphs introduced in Watts & Strogatz (1998), focusing on the spectrum of the normalized graph Laplacian. This spectrum influences the extent to which a signal supported on the vertices of the graph can be simultaneously localized on the graph and in the spectral domain (the surrogate of the frequency domain for signals supported on a graph). This characterization has implications for inferring or interpolating functions supported on such graphs when observations are only available at a subset of nodes.
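
The object of study is easy to reproduce numerically (an illustrative experiment with assumed parameters, using networkx; not the paper's exact setup):

import networkx as nx
import numpy as np

G = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=0)   # small-world graph
L = nx.normalized_laplacian_matrix(G).toarray()
freqs = np.linalg.eigvalsh(L)          # graph "frequencies", all in [0, 2]
print(freqs.min(), freqs.max())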

Download manuscript.

Bibtex
@inproceedings{RabGri20145,
  author = {Michael Rabbat and Vincent Gripon},
  title = {Towards a Spectral Characterization of Signals Supported on Small-World Networks},
  booktitle = {ICASSP},
  year = {2014},
  pages = {4793--4797},
  month = {May},
}

Cluster-based Associative Memories Built From Unreliable Storage

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. Gross, "Cluster-based Associative Memories Built From Unreliable Storage," in ICASSP, pp. 8370--8374, May 2014.

We consider associative memories based on clustered graphs that were recently introduced. These memories are almost optimal in terms of the amount of storage they require (efficiency), and allow retrieving messages with low complexity. We study an unreliable implementation of the memory and compare its error rate and storage efficiency with that of a reliable implementation. We present analytical and simulation results that indicate that the proposed memory structure can tolerate a large number of faults at a reasonable cost, thereby making it a good candidate for achieving highly efficient circuit implementations of associative memories.

Download manuscript.

Bibtex
@inproceedings{LedGriRabGro20145,
  author = {François Leduc-Primeau and Vincent Gripon and Michael Rabbat and Warren Gross},
  title = {Cluster-based Associative Memories Built From Unreliable Storage},
  booktitle = {ICASSP},
  year = {2014},
  pages = {8370--8374},
  month = {May},
}

Sparse Structured Associative Memories as Efficient Set-Membership Data Structures

V. Gripon, V. Skachek and M. G. Rabbat, "Sparse Structured Associative Memories as Efficient Set-Membership Data Structures," in Proceedings of the 51st Allerton conference, pp. 500--505, October 2013.

We study the use of sparse structured associative memories as a memory-efficient and computationally-efficient data structure for representing a set of elements when one wishes to perform set-membership queries and some errors (false positives) are tolerable. Associative memories, when viewed as representing a set, enjoy a number of interesting properties, including that set membership queries can be carried out even when the input (query element) is only partially known or is partially corrupted. The associative memories considered here (initially proposed in [Gripon and Berrou, 2011]) encode the set in the edge structure of a graph. In this paper we generalize this construction to encode the set in the edge structure of a hypergraph. We derive bounds on the false positive rates (the probability that the associative memory erroneously declares that an element is in the stored set when it, in fact, was not). Interestingly, the proposed structures enjoy many of the same properties as Bloom filters (e.g., they have zero false negative rate, the time to perform an insert and lookup does not depend on the number of elements stored, and the false positive rate can be reduced by using additional memory for storage), while also offering the properties of associative memories (allowing for queries on partial or corrupted inputs).
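
A minimal sketch of the set-membership usage described above (the pairwise, graph-based variant; the paper generalizes to hypergraphs): each element is split into symbols, inserted as a clique of links, and a query is declared present iff all its links exist, which gives no false negatives but possible false positives.

from itertools import combinations

class CliqueSet:
    def __init__(self):
        self.links = set()

    def insert(self, symbols):
        # store the element as a clique over its (position, symbol) pairs
        units = list(enumerate(symbols))
        self.links |= {frozenset(p) for p in combinations(units, 2)}

    def query(self, symbols):
        # present iff every pairwise link exists (false positives possible)
        units = list(enumerate(symbols))
        return all(frozenset(p) in self.links
                   for p in combinations(units, 2))

s = CliqueSet()
s.insert(("foo", "bar", "baz"))
print(s.query(("foo", "bar", "baz")), s.query(("foo", "bar", "qux")))
# True False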

Download manuscript.

Bibtex
@inproceedings{GriSkaRab201310,
  author = {Vincent Gripon and Vitaly Skachek and Michael G. Rabbat},
  title = {Sparse Structured Associative Memories as Efficient Set-Membership Data Structures},
  booktitle = {Proceedings of the 51st Allerton conference},
  year = {2013},
  pages = {500--505},
  month = {October},
}

Mémoires associatives pour observations floues

V. Gripon and X. Jiang, "Mémoires associatives pour observations floues," in Proceedings of XXIV-th Gretsi seminar, September 2013.

We introduce an extension to recently proposed associative memories that allows the retrieval of previously stored messages given approximate inputs. We derive performance equations and show that, at a fixed probability of error, memory efficiency is within a constant factor of optimal while complexity remains low (quadratic). These extensions open new perspectives for using associative memories in database engines to perform complex operations while keeping response times low.

Download manuscript.

Bibtex
@inproceedings{GriJia20139,
  author = {Vincent Gripon and Xiaoran Jiang},
  title = {Mémoires associatives pour observations floues},
  booktitle = {Proceedings of XXIV-th Gretsi seminar},
  year = {2013},
  month = {September},
}

Maximum Likelihood Associative Memories

V. Gripon and M. Rabbat, "Maximum Likelihood Associative Memories," in Proceedings of Information Theory Workshop, pp. 1--5, September 2013.

Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amount of memory required to store the same data. Finally, we bound the computational complexity of message retrieval. We then compare these bounds with two existing associative memory architectures: the celebrated Hopfield neural networks and a neural network architecture introduced more recently by Gripon and Berrou.

Download manuscript.

Bibtex
@inproceedings{GriRab20139,
  author = {Vincent Gripon and Michael Rabbat},
  title = {Maximum Likelihood Associative Memories},
  booktitle = {Proceedings of Information Theory Workshop},
  year = {2013},
  pages = {1--5},
  month = {September},
}

Reconstructing a Graph from Path Traces

V. Gripon and M. Rabbat, "Reconstructing a Graph from Path Traces," in Proceedings of International Symposium on Information Theory, pp. 2488-2492, July 2013.

This paper considers the problem of inferring the structure of a network from indirect observations. Each observation (a “trace”) is the unordered set of nodes which are activated along a path through the network. Since a trace does not convey information about the order of nodes within the path, there are many feasible orders for each trace observed, and thus the problem of inferring the network from traces is, in general, ill-posed. We propose and analyze an algorithm which inserts edges by ordering each trace into a path according to which pairs of nodes in the path co-occur most frequently in the observations. When all traces involve exactly 3 nodes, we derive necessary and sufficient conditions for the reconstruction algorithm to exactly recover the graph. Finally, for a family of random graphs, we present expressions for reconstruction error probabilities (false discoveries and missed detections).
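
The principle can be sketched in a few lines (one simple ordering heuristic, not the paper's exact algorithm or analysis): count pair co-occurrences across traces, greedily order each trace so that consecutive nodes co-occur often, then insert the resulting edges.

from collections import Counter
from itertools import combinations

def reconstruct(traces):
    # co-occurrence counts over all observed (unordered) traces
    cooc = Counter(frozenset(p) for t in traces for p in combinations(t, 2))
    edges = set()
    for t in traces:
        nodes = set(t)
        start = max(nodes, key=lambda u: sum(cooc[frozenset((u, v))]
                                             for v in nodes if v != u))
        path, rest = [start], nodes - {start}
        while rest:
            nxt = max(rest, key=lambda v: cooc[frozenset((path[-1], v))])
            path.append(nxt)
            rest.discard(nxt)
        edges |= {frozenset(p) for p in zip(path, path[1:])}
    return edges

print(reconstruct([{1, 2, 3}, {2, 3, 4}, {1, 2}, {3, 4}]))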

Download manuscript.

Bibtex
@inproceedings{GriRab20137,
  author = {Vincent Gripon and Michael Rabbat},
  title = {Reconstructing a Graph from Path Traces},
  booktitle = {Proceedings of International Symposium
on Information Theory},
  year = {2013},
  pages = {2488--2492},
  month = {July},
}

A Low-Power Content-Addressable-Memory Based on Clustered-Sparse-Networks

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "A Low-Power Content-Addressable-Memory Based on Clustered-Sparse-Networks," in Proceedings of the 24th International Conference on Application-specific Systems, Architectures and Processors, pp. 642--653, June 2013.

A low-power Content-Addressable Memory (CAM) is introduced, employing a new mechanism for associativity between input tags and the corresponding addresses of the output data. The proposed architecture is based on a recently developed clustered sparse network with binary-weighted connections that, on average, eliminates most of the parallel comparisons performed during a search. The dynamic energy consumption of the proposed design is therefore significantly lower than that of a conventional low-power CAM. Given an input tag, the architecture computes a few possible locations for the matching tag and performs comparisons only on those to locate the single valid match. A 0.13 µm CMOS technology was used for simulation. The energy consumption and the search delay of the proposed design are 9.5% and 30.4%, respectively, of those of the conventional NAND architecture, at the cost of a 3.4% higher transistor count.
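
A rough software analogy of the search mechanism (not the hardware design): a first stage narrows the search to a few candidate addresses, and only those entries are compared against the input tag. Here the candidate stage is stubbed with a simple hash bucket, whereas the paper uses a clustered sparse network with binary weights; all names are hypothetical.

class TwoStageCAM:
    def __init__(self, num_buckets=16):
        self.entries = []                      # (tag, address) pairs
        self.buckets = [[] for _ in range(num_buckets)]

    def store(self, tag, address):
        idx = len(self.entries)
        self.entries.append((tag, address))
        self.buckets[hash(tag) % len(self.buckets)].append(idx)

    def search(self, tag):
        # Stage 1: candidate locations; Stage 2: exact comparisons.
        for idx in self.buckets[hash(tag) % len(self.buckets)]:
            stored_tag, address = self.entries[idx]
            if stored_tag == tag:
                return address
        return None

cam = TwoStageCAM()
cam.store("0xBEEF", 7)
print(cam.search("0xBEEF"))  # -> 7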

Download manuscript.

Bibtex
@inproceedings{JarGriOniGro20136,
  author = {Hooman Jarollahi and Vincent Gripon and
Naoya Onizawa and Warren J. Gross},
  title = {A Low-Power Content-Addressable-Memory Based on Clustered-Sparse-Networks},
  booktitle = {Proceedings of 24th International
Conference on Application-specific Systems,
Architectures and Processors},
  year = {2013},
  pages = {642--653},
  month = {June},
}

Reduced-complexity binary-weight-coded associative memories

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Reduced-complexity binary-weight-coded associative memories," in Proceedings of International Conference on Acoustics, Speech, and Signal Processing, pp. 2523--2527, May 2013.

Associative memories retrieve stored information given partial or erroneous input patterns. Recently, a new family of associative memories based on Clustered Neural Networks (CNNs) was introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules and thus reduce the hardware complexity, consuming 65% fewer FPGA lookup tables and increasing the operating frequency approximately 1.9-fold compared to previous work.

Download manuscript.

Bibtex
@inproceedings{JarOniGriGro20135,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Warren J. Gross},
  title = {Reduced-complexity binary-weight-coded
associative memories},
  booktitle = {Proceedings of International Conference
on Acoustics, Speech, and Signal Processing},
  year = {2013},
  pages = {2523--2527},
  month = {May},
}

Compressing multisets using tries

V. Gripon, M. Rabbat, V. Skachek and W. J. Gross, "Compressing multisets using tries," in Proceedings of Information Theory Workshop, Lausanne, Switzerland, pp. 647--651, September 2012.

We consider the problem of efficient and lossless representation of a multiset of m words drawn with repetition from a set of size 2^n. One expects that encoding the (unordered) multiset should lead to significant savings in rate compared to encoding an (ordered) sequence of the same words, since information about the order of words in the sequence corresponds to a permutation. We propose and analyze a practical multiset encoder/decoder based on the trie data structure. Encoding requires O(m(n + log m)) operations, and decoding requires O(mn) operations. Of particular interest is the case where the cardinality of the multiset scales as m = 2^(n/c) for some c > 1, as n → ∞. Under this scaling, and when the words in the multiset are drawn independently and uniformly, we show that the proposed encoding leads to an arbitrary improvement in rate over encoding an ordered sequence of the same words. Moreover, the expected length of the proposed codes in this setting is asymptotically within a constant factor of 5/3 of the lower bound.
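
To illustrate where the rate saving comes from, here is a toy trie serializer under simplified assumptions (it is not the paper's exact encoder): words sharing a prefix share trie nodes, each internal node emits two child-presence bits in preorder, and repetition counts are kept at the leaves. All names are illustrative.

from collections import Counter

def encode(words, n):
    """words: list of n-bit tuples; returns (shape bits, leaf counts)."""
    counts = Counter(words)
    bits, leaf_counts = [], []

    def node(prefix, depth):
        if depth == n:
            leaf_counts.append(counts[prefix])  # multiplicity of this word
            return
        children = [c for c in (0, 1)
                    if any(w[:depth + 1] == prefix + (c,) for w in counts)]
        bits.extend(1 if c in children else 0 for c in (0, 1))
        for c in children:
            node(prefix + (c,), depth + 1)

    node((), 0)
    return bits, leaf_counts

words = [(0, 1, 1), (0, 1, 1), (1, 0, 0)]
print(encode(words, 3))  # shared prefixes are stored only once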

Download manuscript.
Download presentation support.

Bibtex
@inproceedings{GriRabSkaGro20129,
  author = {Vincent Gripon and Michael Rabbat and
Vitaly Skachek and Warren J. Gross},
  title = {Compressing multisets using tries},
  booktitle = {Proceedings of Information Theory
Workshop},
  year = {2012},
  address = {Lausanne, Switzerland},
  month = {September},
  pages = {647--651},
}

Random clique codes

V. Gripon, V. Skachek, W. J. Gross and M. Rabbat, "Random clique codes," in Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, pp. 121--125, August 2012.

A new family of associative memories based on sparse neural networks has recently been introduced. These memories achieve excellent performance thanks to the use of error-correcting coding principles. Based on these devices, we introduce a new family of codes termed clique codes, built on the cliques of the balanced c-partite graphs describing the associative memories. In particular, we study an ensemble of random clique codes and prove that this ensemble contains asymptotically good codes. Furthermore, these codes can be efficiently decoded using the neural-network-based associative memories, with limited complexity and memory consumption. They offer an interesting new alternative to existing codes, in particular when the underlying channel is a memoryless erasure channel.
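
As a hedged sketch of the encoding side, the following Python maps a message of c sub-symbols to a clique in a balanced c-partite graph, one vertex per cluster of size l, and outputs the binary indicator of the clique's edges. This is an illustrative reading of the construction, not the paper's exact formalism; names and sizes are assumptions.

from itertools import combinations

def encode_clique(message, l):
    c = len(message)
    # One vertex per cluster: cluster i contributes vertex i*l + symbol.
    vertices = [cluster * l + symbol for cluster, symbol in enumerate(message)]
    edges = set(combinations(sorted(vertices), 2))
    # Binary codeword indexed by all inter-cluster vertex pairs.
    all_pairs = [p for p in combinations(range(c * l), 2)
                 if p[0] // l != p[1] // l]
    return [1 if p in edges else 0 for p in all_pairs]

codeword = encode_clique((2, 0, 1), l=4)  # c = 3 clusters of l = 4 vertices
print(sum(codeword))                      # -> 3 edges for a 3-clique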

Download manuscript.
Download presentation support.

Bibtex
@inproceedings{GriSkaGroRab20128,
  author = {Vincent Gripon and Vitaly Skachek and
Warren J. Gross and Michael Rabbat},
  title = {Random clique codes},
  booktitle = {Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing},
  year = {2012},
  address = {Gothenburg, Sweden},
  month = {August},
  pages = {121--125},
}

Learning long sequences in binary neural networks

X. Jiang, V. Gripon and C. Berrou, "Learning long sequences in binary neural networks," in Proceedings of Cognitive 2012, Nice, France, pp. 165--170, July 2012.

This paper proposes an original architecture of oriented sparse neural networks that introduces sequentiality into associative memories. This architecture can be regarded as a generalization of a recently proposed non-oriented binary network based on cliques. Using a limited neuron resource, the network is able to learn very long sequences and to retrieve them from the knowledge of only a few consecutive symbols.
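
A toy directed variant of the idea, under the simplifying assumption that each transition from a window of k consecutive symbols to the next symbol is learned through binary oriented connections, and that retrieval scores candidate next symbols by how many window vertices point to them (a selection-of-maximum step). All names are illustrative, not the paper's model.

from collections import defaultdict

def learn(sequence, k):
    conn = defaultdict(set)            # (offset, symbol) -> possible next symbols
    for i in range(len(sequence) - k):
        window, nxt = sequence[i:i + k], sequence[i + k]
        for offset, sym in enumerate(window):
            conn[(offset, sym)].add(nxt)
    return conn

def retrieve(conn, window, alphabet):
    scores = {a: sum(a in conn[(o, s)] for o, s in enumerate(window))
              for a in alphabet}
    return max(scores, key=scores.get)  # selection of maximum

conn = learn("abcabdabc", k=2)
print(retrieve(conn, "bc", alphabet="abcd"))  # -> 'a' (follows "bc" in the sequence)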

Download manuscript.

Bibtex
@inproceedings{JiaGriBer20127,
  author = {Xiaoran Jiang and Vincent Gripon and
Claude Berrou},
  title = {Learning long sequences in binary neural
networks},
  booktitle = {Proceedings of Cognitive 2012},
  year = {2012},
  address = {Nice, France},
  pages = {165--170},
  month = {July},
}

Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks," in Proceedings of IEEE International Symposium on Circuits and Systems, pp. 2901--2904, May 2012.

Associative memories are alternatives to indexed memories that, when implemented in hardware, can benefit many applications such as data mining. The classical neural-network-based methodology is impractical to implement since, as the size of the memory increases, the number of information bits stored per memory bit (the efficiency) approaches zero. In addition, the length of a message to be stored and retrieved needs to equal the number of nodes in the network, which limits the total number of messages the network is capable of storing (the diversity). Recently, a novel algorithm based on sparse clustered neural networks has been proposed that achieves nearly optimal efficiency and large diversity. In this paper, a proof-of-concept hardware implementation of these networks is presented, and limitations and possible future research areas are discussed.

Download manuscript.
Download presentation support.

Bibtex
@inproceedings{JarOniGriGro20125,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Warren J. Gross},
  title = {Architecture and Implementation of an
Associative Memory Using Sparse Clustered Networks},
  booktitle = {Proceedings of IEEE International
Symposium on Circuits and Systems},
  month = {May},
  year = {2012},
  pages = {2901--2904},
}

Nearly-optimal associative memories based on distributed constant weight codes

V. Gripon and C. Berrou, "Nearly-optimal associative memories based on distributed constant weight codes," in Proceedings of Information Theory and Applications Workshop, San Diego, CA, USA, pp. 269--273, February 2012.

A new family of sparse neural networks achieving nearly optimal performance has recently been introduced. In these networks, messages are stored as cliques in clustered graphs. In this paper, we interpret these networks using the formalism of error-correcting codes. To achieve this, we introduce two original codes, the thrifty code and the clique code, both sub-families of binary constant-weight codes. We also provide the networks with an enhanced retrieval rule that enables answer correctness and improves performance.

I thank Pascal Vontobel for pointing out several mistakes in the article (this version is updated).
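
For intuition, a thrifty code as described here maps each sub-message to a weight-1 (one-hot) binary word inside its cluster, so a message over c clusters becomes a binary word of weight c. The sketch below is a minimal illustration under that reading, not the paper's full formalism; names are hypothetical.

def thrifty_encode(message, l):
    """Map each of the c sub-messages to a one-hot word of length l."""
    word = []
    for symbol in message:
        block = [0] * l
        block[symbol] = 1
        word.extend(block)
    return word

print(thrifty_encode((2, 0, 3), l=4))
# -> [0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1]  (weight c = 3)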

Download manuscript.
Download presentation support.

Bibtex
@inproceedings{GriBer20122,
  author = {Vincent Gripon and Claude Berrou},
  title = {Nearly-optimal associative memories based
on distributed constant weight codes},
  booktitle = {Proceedings of Information Theory and
Applications Workshop},
  year = {2012},
  address = {San Diego, CA, USA},
  pages = {269--273},
  month = {February},
}

A simple and efficient way to store many messages using neural cliques

V. Gripon and C. Berrou, "A simple and efficient way to store many messages using neural cliques," in Proceedings of IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain, Paris, France, pp. 54--58, April 2011.

Associative memories are devices that are able to learn messages and to retrieve them in the presence of errors or erasures. Their mechanics are similar to those of error decoders. However, correlation plays opposite roles in the two devices: it is the essence of the retrieval process in the former and is avoided in the latter. Original codes are introduced in this paper to allow an effective combination of the two domains. The main idea is to associate with each message to learn a clique in a binary neural network. The obtained performance is dramatically better than that of the state of the art, for instance Hopfield neural networks. Moreover, the proposed model is biologically plausible: it uses binary sparse connections between clusters of neurons provided with only two operations: sum and selection of maximum.
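
The storage and retrieval principle can be sketched in a few lines of Python, assuming c clusters of l binary neurons: a message selects one neuron per cluster, learning adds the binary connections of the corresponding clique, and retrieval combines a sum of incoming signals with a selection of maximum inside each cluster. Sizes and names below are illustrative.

import numpy as np

c, l = 4, 16
W = np.zeros((c * l, c * l), dtype=bool)     # binary sparse connections

def store(message):                          # message: one symbol per cluster
    units = [i * l + s for i, s in enumerate(message)]
    for u in units:
        for v in units:
            if u // l != v // l:
                W[u, v] = True               # clique between the chosen units

def retrieve(partial):                       # None marks an erased cluster
    active = {i * l + s for i, s in enumerate(partial) if s is not None}
    scores = W[:, sorted(active)].sum(axis=1)            # sum of inputs
    return tuple(int(np.argmax(scores[i * l:(i + 1) * l]))  # max per cluster
                 for i in range(c))

store((3, 7, 2, 9))
print(retrieve((3, None, 2, None)))          # -> (3, 7, 2, 9)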

Download manuscript.
Download presentation support.

Bibtex
@inproceedings{GriBer20114,
  author = {Vincent Gripon and Claude Berrou},
  title = {A simple and efficient way to store many
messages using neural cliques},
  booktitle = {Proceedings of IEEE Symposium on
Computational Intelligence, Cognitive Algorithms,
Mind, and Brain},
  year = {2011},
  pages = {54--58},
  address = {Paris, France},
  month = {April},
}

Coded Hopfield networks

C. Berrou and V. Gripon, "Coded Hopfield networks," in Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, pp. 1--5, September 2010.

Error-correcting coding is introduced into associative memories based on Hopfield networks in order to increase the learning diversity as well as the recall robustness in the presence of erasures and errors. To achieve this, the graph associated with the classical Hopfield network is transformed into a bipartite graph in which incoming information is linked to orthogonal or quasi-orthogonal codes. Whereas learning is similar to that of classical (i.e. Hebbian) Hopfield networks, memory retrieval relies on error-correction decoding, which offers strong discrimination between the memorized patterns.
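
For context, here is a minimal classical (Hebbian) Hopfield network of the kind this paper builds on, with ±1 patterns, outer-product learning and iterated sign-rule recall. It is a baseline sketch, not the coded construction proposed in the paper.

import numpy as np

def train(patterns):
    n = patterns.shape[1]
    W = patterns.T @ patterns / n            # Hebbian outer-product learning
    np.fill_diagonal(W, 0)                   # no self-connections
    return W

def recall(W, x, steps=10):
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):         # reached a fixed point
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 32)).astype(float)
noisy = patterns[0].copy()
noisy[:4] *= -1                              # flip a few bits
print(np.array_equal(recall(train(patterns), noisy), patterns[0]))  # typically True at this low load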

Download manuscript.

Bibtex
@inproceedings{BerGri20109,
  author = {Claude Berrou and Vincent Gripon},
  title = {Coded Hopfield networks},
  booktitle = {Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing},
  year = {2010},
  pages = {1--5},
  address = {Brest, France},
  month = {September},
}

Qualitative Concurrent Stochastic Games with Imperfect Information

V. Gripon and O. Serre, "Qualitative Concurrent Stochastic Games with Imperfect Information," in Proceedings of the 36th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science, Springer, Rhodes, Greece, pp. 200--211, July 2009.

We study a model of games that combines concurrency, imperfect information and stochastic aspects. These are finite-state games in which, at each round, the two players choose, simultaneously and independently, an action. A successor state is then chosen according to some fixed probability distribution depending on the previous state and on the pair of actions chosen by the players. Imperfect information is modeled as follows: both players have an equivalence relation over states and, instead of observing the exact state, they only know to which equivalence class it belongs. Therefore, if two partial plays are indistinguishable by some player, he should behave the same way in both of them. We consider reachability objectives (does the play eventually visit a final state?) and Büchi objectives (does the play visit a final state infinitely often?). Our main contribution is to prove that the following problem is complete for 2-ExpTime: decide whether the first player has a strategy that ensures she almost-surely wins against any possible strategy of her opponent. We also characterize the strategies the first player needs to win almost surely.

Download manuscript.

Bibtex
@inproceedings{GriSer20097,
  author = {Vincent Gripon and Olivier Serre},
  title = {Qualitative Concurrent Stochastic Games
with Imperfect Information},
  booktitle = {Proceedings of 36th International
Colloquium of Automata, Languages and Programming},
  year = {2009},
  publisher = {Springer},
  pages = {200--211},
  series = {Lecture Notes in Computer Science},
  address = {Rhodes, Greece},
  month = {July},
}



