Vincent Gripon's website

Blog on my research and teaching

Conference papers

2017

J. Vialatte, V. Gripon and G. Coppin, "Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs," in Proceedings of GlobalSIP, 2017. Manuscript.

M. Ménoret, N. Farrugia, B. Pasdeloup and V. Gripon, "Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction," in Proceedings of GlobalSIP, 2017. To appear. Manuscript.

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Incremental Learning on Chip," in Proceedings of GlobalSIP, 2017. To appear. Manuscript.

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories," in Proceedings of SIPS, 2017. To appear. Manuscript.

V. Gripon, "Tropical Graph Signal Processing," in Proceedings of the Asilomar conference, October 2017. To appear. Manuscript.

T. Stérin, N. Farrugia and V. Gripon, "An Intrinsic Difference Between Vanilla RNNs and GRU Models," in Proceedings of Cognitive, pp. 76--81, February 2017. Manuscript.

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Finding All Matches in a Database using Binary Neural Networks," in Proceedings of Cognitive, pp. 59--64, February 2017. Manuscript.

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Performance of Neural Clique Networks Subject to Synaptic Noise," in Proceedings of Cognitive, pp. 4--9, February 2017. Manuscript.

2016

N. Grelier, B. Pasdeloup, J. Vialatte and V. Gripon, "Neighborhood-Preserving Translations on Graphs," in Proceedings of GlobalSIP, pp. 410--414, October 2016.

P. Tigréat, C. R. K. Lassance, X. Jiang, V. Gripon and C. Berrou, "Assembly Output Codes for Learning Neural Networks," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 285--289, September 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 226--230, September 2016. Manuscript.

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Distributed Coding and Synaptic Pruning," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 206--210, September 2016. Manuscript.

D. Ferro, V. Gripon and X. Jiang, "Nearest Neighbour Search Using Binary Neural Networks," in Proceedings of IJCNN, pp. 5106--5112, July 2016. Manuscript.

2015

E. Coyac, V. Gripon and C. Langlais, "Impact du bruit synaptique sur les performances des réseaux de cliques neurales," in Proceedings of the GRETSI conference, 2015. Manuscript.

R. Danilo, V. Gripon, P. Coussy and L. Conde-Canencia, "Réseaux de Clusters de Neurones Restreints," in Proceedings of the GRETSI conference, 2015. Manuscript.

B. Pasdeloup, V. Gripon, G. Mercier and D. Pastor, "Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux," in Proceedings of the GRETSI conference, 2015. Manuscript.

A. Mheich, M. Hassan, F. Wendling, M. Khalil, O. Dufor, V. Gripon and C. Berrou, "SimNet: A new algorithm for measuring brain networks similarity," in Proceedings of the ICABME international conference, pp. 119--122, 2015. Manuscript.

B. Pasdeloup, M. Rabbat, V. Gripon, D. Pastor and G. Mercier, "Graph Reconstruction from the Observation of Diffused Signals," in Proceedings of the 53rd Allerton Conference, pp. 1386--1390, October 2015. Manuscript.

R. Danilo, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Restricted Clustered Neural Network for Storing Real Data," in Proceedings of the GLSVLSI conference, pp. 205--210, May 2015. Manuscript.

R. Danilo, H. Jarollahi, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks," in Proceedings of the ISCAS conference, pp. 2501--2504, May 2015. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Model of Bottom-Up Visual Attention Using Cortical Magnification," in Proceedings of ICASSP, pp. 1493--1497, April 2015. Manuscript.

A. Mheich, M. Hassan, V. Gripon, O. Dufor, M. Khalil, C. Berrou and F. Wendling, "A novel algorithm for measuring graph similarity: application to brain networks," in Proceedings of the IEEE EMBS Neural Engineering Conference, pp. 1068--1071, April 2015. Manuscript.

A. Aboudib, V. Gripon and B. Tessiau, "Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques," in Proceedings of Cognitive, pp. 36--41, March 2015. Manuscript.

C. Yu, V. Gripon, X. Jiang and H. Jégou, "Neural Associative Memories as Accelerators for Binary Vector Search," in Proceedings of Cognitive, pp. 85--89, March 2015. Manuscript.

S. Larroque, E. S. Gooya, V. Gripon and D. Pastor, "Using Tags to Improve Diversity of Sparse Associative Memories," in Proceedings of Cognitive, pp. 1--7, March 2015. Manuscript.

E. S. Gooya, D. Pastor and V. Gripon, "Automatic face recognition using SIFT and networks of tagged neural cliques," in Proceedings of Cognitive, pp. 57--61, March 2015. Manuscript.

2014

V. Gripon, V. Skachek and M. Rabbat, "Sparse Binary Matrices as Efficient Associative Memories," in Proceedings of the 52nd Allerton conference, pp. 499--504, October 2014. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon, T. Hanyu and W. J. Gross, "Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories," in Proceedings of SiPS, pp. 1--6, October 2014. Manuscript.

C. Berrou, O. Dufor, V. Gripon and X. Jiang, "Information, Noise, Coding, Modulation: What about the Brain?," in Proceedings of the 8th International Symposium on Turbo Codes and Iterative Information Processing, pp. 167--172, August 2014. Manuscript.

Z. Yao, V. Gripon and M. Rabbat, "A GPU-based Associative Memory using Sparse Neural Networks," in Proceedings of the PCNN-14 conference, pp. 688--692, July 2014. Manuscript.

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques," in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, volume 1, pp. 262--268, July 2014. Manuscript.

A. Aboudib, V. Gripon and X. Jiang, "A study of retrieval algorithms of sparse messages in networks of neural cliques," in Proceedings of Cognitive 2014, pp. 140--146, May 2014. Manuscript.

M. Rabbat and V. Gripon, "Towards a Spectral Characterization of Signals Supported on Small-World Networks," in Proceedings of ICASSP, pp. 4793--4797, May 2014. Manuscript.

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. Gross, "Cluster-based Associative Memories Built From Unreliable Storage," in Proceedings of ICASSP, pp. 8370--8374, May 2014. Manuscript.

2013

V. Gripon, V. Skachek and M. G. Rabbat, "Sparse Structured Associative Memories as Efficient Set-Membership Data Structures," in Proceedings of the 51st Allerton conference, pp. 500--505, October 2013. Manuscript.

V. Gripon and X. Jiang, "Mémoires associatives pour observations floues," in Proceedings of the 24th GRETSI colloquium, September 2013. Manuscript.

V. Gripon and M. Rabbat, "Maximum Likelihood Associative Memories," in Proceedings of the Information Theory Workshop, pp. 1--5, September 2013. Manuscript.

V. Gripon and M. Rabbat, "Reconstructing a Graph from Path Traces," in Proceedings of the International Symposium on Information Theory, pp. 2488--2492, July 2013. Manuscript.

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "A Low-Power Content-Addressable Memory Based on Clustered-Sparse Networks," in Proceedings of the 24th International Conference on Application-specific Systems, Architectures and Processors, pp. 642--653, June 2013. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Reduced-complexity binary-weight-coded associative memories," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, pp. 2523--2527, May 2013. Manuscript.

2012

V. Gripon, M. Rabbat, V. Skachek and W. J. Gross, "Compressing multisets using tries," in Proceedings of the Information Theory Workshop, Lausanne, Switzerland, pp. 647--651, September 2012. Manuscript. Slides.

V. Gripon, V. Skachek, W. J. Gross and M. Rabbat, "Random clique codes," in Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, pp. 121--125, August 2012. Manuscript. Slides.

X. Jiang, V. Gripon and C. Berrou, "Learning long sequences in binary neural networks," in Proceedings of Cognitive 2012, Nice, France, pp. 165--170, July 2012. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks," in Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 2901--2904, May 2012. Manuscript. Slides.

V. Gripon and C. Berrou, "Nearly-optimal associative memories based on distributed constant weight codes," in Proceedings of the Information Theory and Applications Workshop, San Diego, CA, USA, pp. 269--273, February 2012. Manuscript. Slides.

2011

V. Gripon and C. Berrou, "A simple and efficient way to store many messages using neural cliques," in Proceedings of the IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain, Paris, France, pp. 54--58, April 2011. Manuscript. Slides.

2010

C. Berrou and V. Gripon, "Coded Hopfield networks," in Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, pp. 1--5, September 2010. Manuscript.

2009

V. Gripon and O. Serre, "Qualitative Concurrent Stochastic Games with Imperfect Information," in Proceedings of the 36th International Colloquium on Automata, Languages and Programming, Springer, Lecture Notes in Computer Science, Rhodes, Greece, pp. 200--211, July 2009. Manuscript.

Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs

J. Vialatte, V. Gripon and G. Coppin, "Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs," in Proceedings of GlobalSIP, 2017.

We propose a generic layer formulation that extends the properties of convolutional layers to any domain that can be described by a graph topology. Namely, we use the support of its adjacency matrix to design learnable weight sharing filters able to exploit the underlying structure of signals in the same fashion as for images. The proposed formulation makes it possible to learn the weights of the filter as well as a scheme that controls how they are shared across the graph. We perform validation experiments with image datasets and show that these filters offer performances comparable with convolutional ones.
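As a rough sketch of this formulation, the snippet below restricts a learnable linear layer to the support of the adjacency matrix and lets each edge softly select among a small set of shared filter weights. The parametrisation (the scheme tensor `S` and weight vector `w`) is an illustrative assumption, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: a 4-node cycle, with self-loops so each vertex sees itself.
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=bool)

n_weights = 3  # number of shared filter weights (like a small 1-D kernel)
# Hypothetical learnable scheme: each edge in the support of A gets a soft
# assignment over the shared weights; entries outside the support stay zero.
S = rng.random((4, 4, n_weights)) * A[..., None]
w = rng.standard_normal(n_weights)  # the shared, learnable filter weights

def graph_conv(x):
    W = S @ w          # effective edge weights, zero outside the support of A
    return W @ x       # aggregate neighbour values, as a convolution would

x = rng.standard_normal(4)
y = graph_conv(x)
assert y.shape == (4,)
assert np.allclose((S @ w)[~A], 0.0)   # non-edges never contribute
```

On a grid graph, choosing the scheme to be translation-invariant would recover an ordinary convolution, which is consistent with the comparison to convolutional filters reported above.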

Download the manuscript.

Bibtex
@inproceedings{ViaGriCop2017,
  author = {Jean-Charles Vialatte and Vincent Gripon
and Gilles Coppin},
  title = {Learning Local Receptive Fields and their
Weight Sharing Scheme on Graphs},
  booktitle = {Proceedings of GlobalSip},
  year = {2017},
  note = {To appear},
}

Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction

M. Ménoret, N. Farrugia, B. Pasdeloup and V. Gripon, "Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction," in Proceedings of GlobalSIP, 2017. To appear.

Graph Signal Processing (GSP) is a promising framework to analyze multi-dimensional neuroimaging datasets, while taking into account both the spatial and functional dependencies between brain signals. In the present work, we apply dimensionality reduction techniques based on graph representations of the brain to decode brain activity from real and simulated fMRI datasets. We introduce seven graphs obtained from a) geometric structure and/or b) functional connectivity between brain areas at rest, and compare them when performing dimension reduction for classification. We show that mixed graphs using both a) and b) offer the best performance. We also show that graph sampling methods perform better than classical dimension reduction including Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
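As a minimal sketch of graph-based dimensionality reduction (one of several techniques compared above), the snippet below projects signals onto the lowest graph frequencies of a Laplacian. The random adjacency matrix is a hypothetical stand-in for the geometric/functional brain graphs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy brain graph: a symmetric weighted adjacency matrix (stand-in).
n = 20
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian

# Graph Fourier basis: eigenvectors of L, ordered by increasing frequency.
eigvals, U = np.linalg.eigh(L)

def reduce_dim(signals, k):
    """Keep only the k lowest graph-frequency components of each signal (rows)."""
    return signals @ U[:, :k]           # graph Fourier coefficients

X = rng.standard_normal((100, n))       # 100 observations on the graph
Z = reduce_dim(X, k=5)
assert Z.shape == (100, 5)
```

The reduced features `Z` would then feed a classifier, in place of PCA or ICA components.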

Download the manuscript.

Bibtex
@inproceedings{MenFarPasGri2017,
  author = {Mathilde Ménoret and Nicolas Farrugia and
Bastien Pasdeloup and Vincent Gripon},
  title = {Evaluating Graph Signal Processing for
Neuroimaging Through Classification and Dimensionality
Reduction},
  booktitle = {Proceedings of GlobalSip},
  year = {2017},
  note = {To appear},
}

Incremental Learning on Chip

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Incremental Learning on Chip," in Proceedings of GlobalSIP, 2017. To appear.

Learning on chip (LOC) is a challenging problem in which an embedded system learns a model and uses it to process and classify unknown data, while adapting to new observations or classes. It may require intensive computational power to adapt to new data, leading to a complex hardware implementation. We address this issue by introducing an incremental learning method based on the combination of a pre-trained Convolutional Neural Network (CNN) and majority votes, using Product Quantization (PQ) as a bridge between them. We detail a hardware implementation of the proposed method (validated on an FPGA target) using limited hardware resources while providing substantial processing acceleration compared to a CPU counterpart.
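The CNN–PQ–majority-vote combination described above can be sketched as follows, with random features standing in for pre-trained CNN outputs; the vote structure and the random codebooks are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

d, n_sub = 8, 4          # feature dimension, number of PQ sub-spaces
sub_d = d // n_sub
k = 16                   # codewords per sub-quantizer (random stand-ins)
codebooks = rng.standard_normal((n_sub, k, sub_d))
# votes[s][codeword] -> class counts, filled incrementally, one example at a time
votes = [defaultdict(lambda: defaultdict(int)) for _ in range(n_sub)]

def quantize(x):
    """Map a feature vector to one codeword index per sub-space."""
    parts = x.reshape(n_sub, sub_d)
    return [int(np.argmin(((codebooks[s] - parts[s]) ** 2).sum(1)))
            for s in range(n_sub)]

def learn(x, label):
    """Incremental update: no retraining, just per-codeword class counts."""
    for s, c in enumerate(quantize(x)):
        votes[s][c][label] += 1

def classify(x):
    """Majority vote across the sub-quantizers."""
    tally = defaultdict(int)
    for s, c in enumerate(quantize(x)):
        for label, n in votes[s][c].items():
            tally[label] += n
    return max(tally, key=tally.get) if tally else None

x0 = rng.standard_normal(d)   # stand-in for a CNN feature of one example
learn(x0, "cat")
assert classify(x0) == "cat"
```

Because learning only increments counters, new classes or observations can be absorbed without touching the feature extractor, which is what makes the approach hardware-friendly.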

Download the manuscript.

Bibtex
@inproceedings{HacGriFarArzJez2017,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
  title = {Incremental Learning on Chip},
  booktitle = {Proceedings of GlobalSip},
  year = {2017},
  note = {To appear},
}

Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories," in Proceedings of SIPS, 2017. To appear.

Thanks to their ability to absorb large amounts of data, Convolutional Neural Networks (CNNs) have become state-of-the-art in numerous vision challenges, sometimes even on par with biological vision. They rely on optimisation routines that typically require intensive computational power; thus, the question of embedded architectures is a very active field of research. Of particular interest is the problem of incremental learning, where the device adapts to new observations or classes. To tackle this challenging problem, we propose to combine pre-trained CNNs with binary associative memories, using product random sampling as an intermediate between the two methods. The obtained architecture requires significantly less computational power and memory usage than existing counterparts. Moreover, using various challenging vision datasets, we show that the proposed architecture is able to perform one-shot learning, even using only a small portion of the dataset, while keeping very good accuracy.

Download the manuscript.

Bibtex
@inproceedings{HacGriFarArzJez2017a,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
  title = {Budget Restricted Incremental Learning with
Pre-Trained Convolutional Neural Networks and Binary
Associative Memories},
  booktitle = {Proceedings of SIPS},
  year = {2017},
  note = {To appear},
}

Tropical Graph Signal Processing

V. Gripon, "Tropical Graph Signal Processing," in Proceedings of the Asilomar conference, October 2017. To appear.

For the past few years, the domain of graph signal processing has extended classical Fourier analysis to domains described by graphs. Most of the results were obtained by analogy with the study of heat propagation. We propose to perform a similar analysis in the context of tropical algebra, widely used in theoretical computer science to monitor propagation processes over graphs of distances. We introduce a Tropical Graph Fourier Transform and prove a few results on graph inference and the existence of a tropical uncertainty principle.
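The tropical setting replaces (+, ×) with (min, +). As a toy illustration of the propagation processes mentioned above (not the paper's transform), tropical matrix powers of a distance matrix propagate shortest paths over a graph:

```python
import numpy as np

INF = np.inf

def minplus(A, B):
    """Matrix product in the (min, +) tropical semiring:
    result[i, k] = min_j (A[i, j] + B[j, k])."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

# Weighted graph of distances (INF = no edge), zero self-distances.
D = np.array([[0.0, 1.0, INF],
              [1.0, 0.0, 2.0],
              [INF, 2.0, 0.0]])

# Tropical powers of D propagate distances along paths; the fixed point
# is the all-pairs shortest-path matrix.
P = minplus(D, D)
assert P[0, 2] == 3.0                  # path 0 -> 1 -> 2
assert np.allclose(minplus(P, P), P)   # fixed point reached
```

This min-plus fixed-point behaviour is the kind of propagation over graphs of distances that the tropical framework is built to monitor.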

Download the manuscript.

Bibtex
@inproceedings{Gri201710,
  author = {Vincent Gripon},
  title = {Tropical Graph Signal Processing},
  booktitle = {Proceedings of the Asilomar
conference},
  year = {2017},
  month = {October},
  note = {To appear},
}

An Intrinsic Difference Between Vanilla RNNs and GRU Models

T. Stérin, N. Farrugia and V. Gripon, "An Intrinsic Difference Between Vanilla RNNs and GRU Models," in Proceedings of Cognitive, pp. 76--81, February 2017.

In order to perform well in practice, Recurrent Neural Networks (RNN) require computationally heavy architectures, such as Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM). Indeed, the original Vanilla model fails to encapsulate middle and long term sequential dependencies. The aim of this paper is to show that gradient training issues, which have motivated the introduction of LSTM and GRU models, are not sufficient to explain the failure of the simplest RNN. Using the example of Reber’s grammar, we propose an experimental measure of both Vanilla and GRU models, which suggests an intrinsic difference in their dynamics. A better mathematical understanding of this difference could lead to more efficient models without compromising performance.
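For reference, the two cell updates being compared can be written in a few lines of numpy (biases omitted for brevity; weights random rather than trained):

```python
import numpy as np

rng = np.random.default_rng(0)
h_dim, x_dim = 3, 2
sigmoid = lambda v: 1 / (1 + np.exp(-v))

# Vanilla RNN cell: h' = tanh(Wx x + Wh h)
Wx = rng.standard_normal((h_dim, x_dim))
Wh = rng.standard_normal((h_dim, h_dim))

def vanilla_step(h, x):
    return np.tanh(Wx @ x + Wh @ h)

# GRU cell: gated interpolation between the old state and a candidate state.
Wz, Uz = rng.standard_normal((h_dim, x_dim)), rng.standard_normal((h_dim, h_dim))
Wr, Ur = rng.standard_normal((h_dim, x_dim)), rng.standard_normal((h_dim, h_dim))
Wc, Uc = rng.standard_normal((h_dim, x_dim)), rng.standard_normal((h_dim, h_dim))

def gru_step(h, x):
    z = sigmoid(Wz @ x + Uz @ h)          # update gate
    r = sigmoid(Wr @ x + Ur @ h)          # reset gate
    c = np.tanh(Wc @ x + Uc @ (r * h))    # candidate state
    return (1 - z) * h + z * c            # keep old state where z is small

h, x = np.zeros(h_dim), rng.standard_normal(x_dim)
assert vanilla_step(h, x).shape == (h_dim,)
assert gru_step(h, x).shape == (h_dim,)
```

The gates let a GRU copy its state almost unchanged across many steps, a behaviour the vanilla update cannot reproduce; the paper's point is that this difference in dynamics is intrinsic, beyond gradient-training issues.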

Download the manuscript.

Bibtex
@inproceedings{StFarGri201702,
  author = {Tristan Stérin and Nicolas Farrugia and
Vincent Gripon},
  title = {An Intrinsic Difference Between Vanilla
RNNs and GRU Models},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {76--81},
  month = {February},
}

Finding All Matches in a Database using Binary Neural Networks

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Finding All Matches in a Database using Binary Neural Networks," in Proceedings of Cognitive, pp. 59--64, February 2017.

The most efficient architectures of associative memories are based on binary neural networks. For example, Sparse Clustered Networks (SCNs) are able to achieve almost optimal memory efficiency while providing robust indexation of pieces of information through cliques in a neural network. In the canonical formulation of the associative memory problem, the unique stored message matching a given input probe is to be retrieved. In this paper, we focus on the more general problem of finding all messages matching the given probe. We consider real datasets in which many different messages can match given probes, which cannot happen with uniformly distributed messages due to the unlikelihood of their sharing large common parts with one another. Namely, we implement a crossword dictionary containing 8-letter English words, and a chess endgame dataset, using associative memories based on binary neural networks. We explain how to adapt the SCN architecture to these challenging datasets and introduce a backtracking procedure to retrieve all completions of the given input. We assess the performance of the proposed method using different measures and discuss the importance of parameters.
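Stripped of the SCN machinery, the backtracking idea can be sketched as follows: extend the probe letter by letter and prune any prefix that no stored word can complete. The tiny dictionary below is, of course, hypothetical.

```python
# Toy dictionary and a probe with unknown letters ('?'), in the spirit of
# the crossword experiment described above (the SCN index is omitted).
WORDS = {"neural", "neuron", "neutra", "signal"}

def completions(probe, prefix="", out=None):
    """Backtracking search: extend the prefix one letter at a time,
    pruning branches that no stored word can complete."""
    if out is None:
        out = []
    if len(prefix) == len(probe):
        if prefix in WORDS:
            out.append(prefix)
        return out
    pos = len(prefix)
    candidates = ("abcdefghijklmnopqrstuvwxyz"
                  if probe[pos] == "?" else probe[pos])
    for c in candidates:
        cand = prefix + c
        if any(w.startswith(cand) for w in WORDS):   # prune dead branches
            completions(probe, cand, out)
    return out

assert sorted(completions("neur??")) == ["neural", "neuron"]
```

In the paper, the pruning test is answered by the associative memory itself rather than by a linear scan of the dictionary.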

Download the manuscript.

Bibtex
@inproceedings{HacGriFarArzJez201702,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
  title = {Finding All Matches in a Database using
Binary Neural Networks},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {59--64},
  month = {February},
}

Performance of Neural Clique Networks Subject to Synaptic Noise

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Performance of Neural Clique Networks Subject to Synaptic Noise," in Proceedings of Cognitive, pp. 4--9, February 2017.

Artificial neural networks are so called because they are supposed to be inspired by the brain and by the way neurons work. While some networks are used purely for computational purposes and do not endeavor to be a plausible representation of what happens in the brain, such as deep learning neural networks, others do. However, the question of noise in the brain and its impact on the functioning of those networks has been little studied. For example, it is widely known that synapses misfire with a significant probability. We model this noise and study its impact on associative memories powered by neural networks: neural clique networks, with Hopfield networks as a reference point. We show that synaptic noise can in fact slightly improve the performance of the decoding process of neural clique networks by avoiding local minima.

Download the manuscript.

Bibtex
@inproceedings{CoyGriLanBer201702,
  author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais and Claude Berrou},
  title = {Performance of Neural Clique Networks
Subject to Synaptic Noise},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {4--9},
  month = {February},
}

Neighborhood-Preserving Translations on Graphs

N. Grelier, B. Pasdeloup, J. Vialatte and V. Gripon, "Neighborhood-Preserving Translations on Graphs," in Proceedings of GlobalSIP, pp. 410--414, October 2016.

In many domains (e.g. Internet of Things, neuroimaging) signals are naturally supported on graphs. These graphs usually convey information on the similarity between the values taken by the signal at the corresponding vertices. One benefit of using graphs is that they allow the definition of ad hoc operators to perform signal processing. Among them, translations are of paramount importance in many tasks. In this paper we propose new definitions of translations on graphs using a few simple properties. Namely, we propose to define translations as functions from vertices to adjacent ones that preserve neighborhood properties of the graph. We show that our definitions, contrary to other works on the subject, match usual translations on grid graphs.
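The neighborhood-preservation property can be sketched on a toy cycle graph (the precise definitions are in the paper): a candidate translation must send every vertex to a neighbor and every edge to an edge.

```python
def is_translation(phi, edges):
    """Check that phi maps every vertex to a neighbour and every edge to an
    edge (a sketch of neighborhood preservation, per the definition above)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return (all(phi[v] in adj[v] for v in adj)              # move to a neighbour
            and all(phi[v] in adj[phi[u]] for u, v in edges))  # edges to edges

# 4-cycle graph: rotating all vertices by one step is a translation.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
rotate = {v: (v + 1) % 4 for v in range(4)}
assert is_translation(rotate, cycle)

# The identity map fails: without self-loops, a vertex is not its own neighbour.
identity = {v: v for v in range(4)}
assert not is_translation(identity, cycle)
```

On a 2-D grid graph, the maps passing this test are exactly the usual image shifts, which is the property claimed above.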


Bibtex
@inproceedings{GrePasViaGri201610,
  author = {Nicolas Grelier and Bastien Pasdeloup and
Jean-Charles Vialatte and Vincent Gripon},
  title = {Neighborhood-Preserving Translations on
Graphs},
  booktitle = {Proceedings of GlobalSIP},
  year = {2016},
  pages = {410--414},
  month = {October},
}

Assembly Output Codes for Learning Neural Networks

P. Tigréat, C. R. K. Lassance, X. Jiang, V. Gripon and C. Berrou, "Assembly Output Codes for Learning Neural Networks," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 285--289, September 2016.

Neural network-based classifiers usually encode the class labels of input data via a completely disjoint code, i.e. a binary vector with only one bit associated with each category. We use coding theory to propose assembly codes where each element is associated with several classes, making for better target vectors. These codes emulate the combination of several classifiers, which is a well-known method to improve decision accuracy. Our experiments on datasets such as MNIST with a multi-layer neural network show that assembly output codes, which are characterized by a higher minimum Hamming distance, result in better classification performance. These codes are also well suited to the use of clustered clique-based networks in category representation.
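The contrast with one-hot targets can be sketched as follows: each class is assigned a codeword with several active bits, and a noisy network output is decoded to the nearest codeword. The code matrix below is an illustrative assumption, not one of the paper's codes.

```python
import numpy as np

# Hypothetical assembly code: each class -> a binary codeword with several
# active bits (instead of one-hot), chosen for large pairwise Hamming distance.
CODES = np.array([[1, 1, 0, 0, 1, 0],
                  [0, 1, 1, 0, 0, 1],
                  [1, 0, 0, 1, 0, 1]])   # 3 classes, 6 output units

def decode(output):
    """Map a (possibly noisy) network output to the nearest codeword."""
    return int(np.argmin(np.abs(CODES - output).sum(axis=1)))

# A corrupted version of the class-0 codeword still decodes correctly,
# because the codewords are far apart in Hamming distance.
noisy = np.array([0.9, 0.8, 0.1, 0.2, 0.6, 0.1])
assert decode(noisy) == 0
```

With one-hot targets, a single flipped output bit can change the decision; a larger minimum distance buys tolerance to several noisy outputs.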

Download the manuscript.

Bibtex
@inproceedings{TigLasJiaGriBer20169,
  author = {Philippe Tigréat and Carlos Rosar Kos
Lassance and Xiaoran Jiang and Vincent Gripon and
Claude Berrou},
  title = {Assembly Output Codes for Learning Neural
Networks},
  booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
  year = {2016},
  pages = {285--289},
  month = {September},
}

A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features

A. Aboudib, V. Gripon and G. Coppin, "A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 226--230, September 2016.

Establishing correspondences between image features is a fundamental problem in many computer vision tasks. It is traditionally viewed as a graph matching problem, and solved using an optimization procedure. In this paper, we propose a new approach to solving the correspondence problem from a coding/decoding perspective. We then present an iterative matching algorithm inspired by the turbo-decoding concept. We provide an experimental evaluation of the proposed method, and show that it performs better than state-of-the-art algorithms in the presence of clutter, thanks to turbo-style decoding.

Download the manuscript.

Bibtex
@inproceedings{AboGriCop20169,
  author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
  title = {A Turbo-Inspired Iterative Approach for
Correspondence Problems of Image Features},
  booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
  year = {2016},
  pages = {226--230},
  month = {September},
}

Distributed Coding and Synaptic Pruning

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Distributed Coding and Synaptic Pruning," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 206--210, September 2016.

This paper deals with the modelling of synaptic pruning in the developing brain, at the informational level. Relying on the hypotheses of clique-based memory and Hebbian learning, we consider several scenarios to try to understand how reliable point-to-point communication may be achieved in the cortex, whereas at birth, neurons are connected to a multitude of other neurons, similar to a broadcast-type connection. It is shown that quasi-perfect transfer of information can be obtained in a plausible way using simple rules if a mechanism of synaptic normalization is implemented at the receiver side.

Download the manuscript.

Bibtex
@inproceedings{CoyGriLanBer20169,
  author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais and Claude Berrou},
  title = {Distributed Coding and Synaptic Pruning},
  booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
  year = {2016},
  pages = {206--210},
  month = {September},
}

Nearest Neighbour Search Using Binary Neural Networks

D. Ferro, V. Gripon and X. Jiang, "Nearest Neighbour Search Using Binary Neural Networks," in Proceedings of IJCNN, pp. 5106--5112, July 2016.

The problem of finding nearest neighbours in terms of Euclidean distance, Hamming distance or another distance metric is a very common operation in computer vision and pattern recognition. In order to accelerate the search for the nearest neighbour in large datasets, many methods rely on the coarse-fine approach. In this paper we propose to combine Product Quantization (PQ) and binary neural associative memories to perform the coarse search. Our motivation lies in the fact that, in these neural networks, the dimension of the representation associated with a set of k vectors is independent of k. We run experiments on the TEXMEX SIFT1M and MNIST databases and observe significant improvements in terms of search complexity compared to raw PQ.
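As background, the raw PQ baseline that the proposed coarse search accelerates can be sketched as follows (the binary associative memory layer is omitted; to keep the toy example exact, the database is drawn from the codebooks themselves):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_sub, k = 8, 4, 16   # vector dim, sub-spaces, codewords per sub-space
sub_d = d // n_sub
codebooks = rng.standard_normal((n_sub, k, sub_d))

def encode(x):
    """Quantize each sub-vector to the index of its nearest codeword."""
    parts = x.reshape(n_sub, sub_d)
    return np.array([int(np.argmin(((codebooks[s] - parts[s]) ** 2).sum(1)))
                     for s in range(n_sub)])

# Toy database built from the codebooks (so the PQ codes are exact).
codes = rng.integers(0, k, size=(20, n_sub))
database = np.stack([np.concatenate([codebooks[s, c]
                                     for s, c in enumerate(row)])
                     for row in codes])
assert np.array_equal(encode(database[7]), codes[7])

def search(query):
    """Asymmetric PQ search: per-sub-space distance tables, then lookups."""
    parts = query.reshape(n_sub, sub_d)
    tables = ((codebooks - parts[:, None, :]) ** 2).sum(-1)   # (n_sub, k)
    dists = tables[np.arange(n_sub), codes].sum(axis=1)       # one per item
    return int(np.argmin(dists))

q = database[7] + 0.01 * rng.standard_normal(d)   # slightly perturbed query
assert search(q) == 7
```

The cost of `search` is dominated by the table lookups over all stored codes; the paper's contribution is to replace that exhaustive coarse pass with a binary neural associative memory.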

Download the manuscript.

Bibtex
@inproceedings{FerGriJia201607,
  author = {Demetrio Ferro and Vincent Gripon and
Xiaoran Jiang},
  title = {Nearest Neighbour Search Using Binary
Neural Networks},
  booktitle = {Proceedings of IJCNN},
  year = {2016},
  pages = {5106--5112},
  month = {July},
}

Impact du bruit synaptique sur les performances des réseaux de cliques neurales

E. Coyac, V. Gripon and C. Langlais, "Impact du bruit synaptique sur les performances des réseaux de cliques neurales," in Proceedings of the GRETSI conference, 2015.

Neural networks draw inspiration from the functioning and structure of the brain, and the argument of biological plausibility is often used to defend or criticise one model against another. The neurobiological literature tells us that the brain is a noisy system, in which components and their connections are unreliable. Yet the behaviour of neural networks in the presence of this noise has rarely been studied. This paper proposes a model of cerebral noise and analyses its impact on the performance of neural clique networks. We show that noise can improve them, notably by avoiding local minima. We also analyse the impact of this noise on the performance of Hopfield networks.

Download the manuscript.

Bibtex
@inproceedings{CoyGriLan2015,
  author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais},
  title = {Impact du bruit synaptique sur les
performances des réseaux de cliques neurales},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

Réseaux de Clusters de Neurones Restreints

R. Danilo, V. Gripon, P. Coussy and L. Conde-Canencia, "Réseaux de Clusters de Neurones Restreints," in Proceedings of the GRETSI conference, 2015.

Associative memories are able to retrieve a previously stored message when presented with an incomplete version of it. A model of associative memory based on binary neurons and binary connections, named Clustered Neural Network (CNN), was recently introduced. The performance of this model drops when the stored messages follow a non-uniform distribution. The aim of this article is to propose a new associative memory model, inspired by CNNs and restricted Boltzmann machines, in order to reduce the vulnerability to non-uniform messages. A hardware implementation of this model is also presented in this article. It multiplies the number of stored messages by 3, with an increase in hardware complexity of only 40%.

Download the manuscript.

Bibtex
@inproceedings{DanGriCouCon2015,
  author = {Robin Danilo and Vincent Gripon and
Philippe Coussy and Laura Conde-Canencia},
  title = {Réseaux de Clusters de Neurones
Restreints},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux

B. Pasdeloup, V. Gripon, G. Mercier and D. Pastor, "Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux," in Proceedings of the GRETSI conference, 2015.

Graph signal processing is a recent field that aims to generalise classical signal processing tools in order to analyse signals evolving on complex domains. These domains are represented by graphs for which one can compute a matrix called the normalised Laplacian [3]. It has been shown that the eigenvalues of this Laplacian correspond to the frequencies of the Fourier domain in classical signal processing [2]. Thus, the frequency domain is not the same for every graph supporting signals. One consequence is that there is no non-trivial generalisation of Heisenberg's uncertainty principle, which states that a signal cannot be fully localised in both the time and frequency domains. One way to generalise this principle, introduced by Agaskar & Lu in [1], consists in determining a curve that serves as a lower bound on the trade-off between precision in the graph domain and precision in the spectral domain. The aim of this paper is to propose a characterisation of the signals reaching this curve, for a class of graphs more generic than the one studied by Agaskar & Lu.
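The two quantities traded off along Agaskar & Lu's uncertainty curve can be computed in a few lines: a graph spread of a signal s around a vertex u0 (weighted by squared geodesic distances) and a spectral spread (the Rayleigh quotient of the normalised Laplacian). A path graph serves as the toy example; this is only a sketch of the definitions, not of the paper's characterisation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Path graph on 5 vertices and its normalised Laplacian L = I - D^-1/2 A D^-1/2.
n = 5
A = np.diag(np.ones(n - 1), 1)
A = A + A.T
deg = A.sum(axis=1)
L = np.eye(n) - A / np.sqrt(np.outer(deg, deg))

def spectral_spread(s):
    """Rayleigh quotient of L: concentration of s in high graph frequencies."""
    return float(s @ L @ s) / float(s @ s)

def graph_spread(s, u0, dist):
    """Energy of s weighted by squared geodesic distance to vertex u0."""
    return float((dist[u0] ** 2 * s ** 2).sum()) / float(s @ s)

# Geodesic distances on a path graph are just index differences.
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])

s = rng.standard_normal(n)
assert 0 <= spectral_spread(s) <= 2   # eigenvalues of L lie in [0, 2]
assert graph_spread(s, 0, dist) >= 0
```

Sweeping over signals and recording the attainable (graph spread, spectral spread) pairs traces out the feasible region whose lower boundary is the uncertainty curve.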

Download the manuscript.

Bibtex
@inproceedings{PasGriMerPas2015,
  author = {Bastien Pasdeloup and Vincent Gripon and
Grégoire Mercier and Dominique Pastor},
  title = {Vers une caractérisation de la courbe
d'incertitude pour des graphes portant des signaux},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

SimNet: A new algorithm for measuring brain networks similarity

A. Mheich, M. Hassan, F. Wendling, M. Khalil, O. Dufor, V. Gripon and C. Berrou, "SimNet: A new algorithm for measuring brain networks similarity," in Proceedings of the ICABME international conference, pp. 119--122, 2015.

Measuring similarity among graphs is recognized as a non-trivial problem. Most of the algorithms proposed so far ignore the spatial location of vertices, which is a crucial factor in the context of brain networks. In this paper, we present a novel algorithm, called “SimNet”, for measuring the similarity between two graphs whose vertices represent the position of sources over the cortex. The novelty is to account for differences at the level of spatially-registered vertices and edges. Simulated graphs are used to evaluate the algorithm performance and to compare it with methods reported elsewhere. Results show that SimNet is able to quantify the similarity between two graphs under a spatial constraint based on the 3D location of edges. The application of SimNet on real data (dense EEG) reveals the presence of spatially-distinct brain network modules activating during cognitive activity.

Download the manuscript.

Bibtex
@inproceedings{MheHasWenKhaDufGriBer2015,
  author = {A. Mheich and M. Hassan and F. Wendling
and M. Khalil and O. Dufor and V. Gripon and C.
Berrou},
  title = {SimNet: A new algorithm for measuring brain
networks similarity},
  booktitle = {Proceedings of the ICABME international
conference},
  year = {2015},
  pages = {119--122},
}

Graph Reconstruction from the Observation of Diffused Signals

B. Pasdeloup, M. Rabbat, V. Gripon, D. Pastor and G. Mercier, "Graph Reconstruction from the Observation of Diffused Signals," in Proceedings of the 53rd Allerton Conference, pp. 1386--1390, October 2015.

Signal processing on graphs has received a lot of attention in recent years. Many techniques have arisen, inspired by classical signal processing, to allow studying signals on any kind of graph. A common aspect of these techniques is that they require a graph that correctly models the studied support in order to explain the signals observed on it. However, in many cases, such a graph is unavailable or has no real physical existence. An example of the latter case is a set of sensors randomly scattered in a field, which nonetheless observe related information. To study such signals, there is no intuitive choice of a support graph. In this document, we address the problem of inferring a graph structure from observations of signals, under the assumption that they result from the diffusion of initially i.i.d. signals. To validate our approach, we design an experimental protocol in which we diffuse signals on a known graph. We then discard the graph and show that we are able to retrieve it very precisely solely from the knowledge of the diffused signals.
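The diffusion setting above can be illustrated with a minimal numpy sketch (an illustration of the setting only, not the paper's reconstruction algorithm; the cycle graph and the choice of diffusion operator are assumptions): diffusing i.i.d. unit-variance signals through a symmetric operator W makes the sample covariance of the observations converge to W², whose eigenvectors are those of W.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy support: adjacency matrix of a 5-node cycle (an assumed example).
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# One assumed diffusion operator: adjacency scaled by its spectral radius.
W = A / np.max(np.abs(np.linalg.eigvalsh(A)))

# Observations: i.i.d. unit-variance signals diffused once, y = W x.
X = rng.standard_normal((n, 200000))
Y = W @ X

# The sample covariance of the diffused signals approaches W @ W
# (W is symmetric), so its eigenvectors reveal those of the operator.
C = (Y @ Y.T) / Y.shape[1]
```

With enough samples, C is entrywise close to W @ W, which is the natural starting point for recovering the graph up to the ambiguities discussed in the paper.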

Download the manuscript.

Bibtex
@inproceedings{PasRabGriPasMer201510,
  author = {Bastien Pasdeloup and Michael Rabbat and
Vincent Gripon and Dominique Pastor and Gregoire
Mercier},
  title = {Graph Reconstruction from the Observation
of Diffused Signals},
  booktitle = {Proceedings of the 53rd Allerton
Conference},
  year = {2015},
  pages = {1386--1390},
  month = {October},
}

Restricted Clustered Neural Network for Storing Real Data

R. Danilo, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Restricted Clustered Neural Network for Storing Real Data," in Proceedings of the GLSVLSI conference, pp. 205--210, May 2015.

Associative memories are an alternative to classical indexed memories that are capable of retrieving a message previously stored when an incomplete version of this message is presented. Recently, a new model of associative memory based on binary neurons and binary links has been proposed. This model, named Clustered Neural Network (CNN), offers large storage diversity (number of messages stored) and fast message retrieval when implemented in hardware. The performance of this model drops when the stored message distribution is non-uniform. In this paper, we enhance the CNN model to support non-uniform message distributions by adding features of Restricted Boltzmann Machines. In addition, we present a fully parallel hardware design of the model. The proposed implementation multiplies the performance (diversity) of Clustered Neural Networks by a factor of 3 with an increase in complexity of 40%.

Download the manuscript.

Bibtex
@inproceedings{DanGriCouConGro20155,
  author = {Robin Danilo and Vincent Gripon and
Philippe Coussy and Laura Conde-Canencia and Warren J.
Gross},
  title = {Restricted Clustered Neural Network for
Storing Real Data},
  booktitle = {Proceedings of the GLSVLSI conference},
  year = {2015},
  pages = {205--210},
  month = {May},
}

Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks

R. Danilo, H. Jarollahi, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks," in Proceedings of the ISCAS Conference, pp. 2501--2504, May 2015.

Associative memories are capable of retrieving previously stored patterns given parts of them. This feature makes them good candidates for pattern detection in images. Clustered Neural Networks are a recently introduced family of associative memories that allows fast pattern retrieval when implemented in hardware. In this paper, we propose a new pattern retrieval algorithm that results in a dramatically lower error rate compared to that of the conventional approach when used in an oriented edge detection process, a function that plays an important role in image processing. Furthermore, we present the corresponding hardware architecture and implementation of the new approach, compare it with a conventional architecture from the literature, and show that the proposed architecture does not significantly affect hardware complexity.

Download the manuscript.

Bibtex
@inproceedings{DanJarGriCouConGro20155,
  author = {Robin Danilo and Homman Jarollahi and
Vincent Gripon and Philippe Coussy and Laura
Conde-Canencia and Warren J. Gross},
  title = {Algorithm and Implementation of an
Associative Memory for Oriented Edge Detection Using
Improved Clustered Neural Networks},
  booktitle = {Proceedings of ISCAS Conference},
  year = {2015},
  pages = {2501--2504},
  month = {May},
}

A Model of Bottom-Up Visual Attention Using Cortical Magnification

A. Aboudib, V. Gripon and G. Coppin, "A Model of Bottom-Up Visual Attention Using Cortical Magnification," in Proceedings of ICASSP, pp. 1493--1497, April 2015.

The focus of visual attention has been argued to play a key role in object recognition. Many computational models of visual attention were proposed to estimate locations of eye fixations driven by bottom-up stimuli. Most of these models rely on pyramids consisting of multiple scaled versions of the visual scene. This design aims at capturing the fact that neural cells in higher visual areas tend to have larger receptive fields (RFs). On the other hand, very few models represent multi-scaling resulting from the eccentricity-dependent RF sizes within each visual layer, also known as the cortical magnification effect. In this paper, we demonstrate that using a cortical-magnification-like mechanism can lead to performant alternatives to pyramidal approaches in the context of attentional modeling. Moreover, we argue that introducing such a mechanism equips the proposed model with additional properties related to overt attention and distance-dependent saliency that are worth exploring.

Download the manuscript.

Bibtex
@inproceedings{AboGriCop20154,
  author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
  title = {A Model of Bottom-Up Visual Attention Using
Cortical Magnification},
  booktitle = {Proceedings of ICASSP},
  year = {2015},
  pages = {1493--1497},
  month = {April},
}

A novel algorithm for measuring graph similarity: application to brain networks

A. Mheich, M. Hassan, V. Gripon, O. Dufor, M. Khalil, C. Berrou and F. Wendling, "A novel algorithm for measuring graph similarity: application to brain networks," in Proceedings of the IEEE EMBS Neural Engineering Conference, pp. 1068--1071, April 2015.

Measuring the similarity among graphs is a challenging issue in many disciplines including neuroscience. Several algorithms, mainly based on vertex or edge properties, have been proposed to address this issue. Most of them ignore the physical location of the vertices, which is a crucial factor in the analysis of brain networks. Indeed, functional brain networks are usually represented as graphs composed of vertices (brain regions) connected by edges (functional connectivity). In this paper, we propose a novel algorithm to measure the similarity between graphs. The novelty of our approach is to account for vertices, edges and spatiality at the same time. The proposed algorithm is evaluated using synthetic graphs. It shows a high ability to detect and measure similarity between graphs. An application to real functional brain networks is then described. The algorithm allows for quantification of the inter-subject variability during a picture naming task.

Download the manuscript.

Bibtex
@inproceedings{MheHasGriDufKhaBerWen20154,
  author = {A. Mheich and M. Hassan and V. Gripon and
O. Dufor and M. Khalil and C. Berrou and F. Wendling},
  title = {A novel algorithm for measuring graph
similarity: application to brain networks},
  booktitle = {Proceedings of the IEEE EMBS Neural
Engineering Conference},
  year = {2015},
  pages = {1068--1071},
  month = {April},
}

Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques

A. Aboudib, V. Gripon and B. Tessiau, "Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques," in Proceedings of Cognitive, pp. 36--41, March 2015.

Associative memories are devices capable of retrieving previously stored messages from parts of their content. They are used in a variety of applications including CPU caches, routers, intrusion detection systems, etc. They are also considered a good model for human memory, motivating the use of neural-based techniques. When it comes to cognition, it is important to provide such devices with the ability to perform complex requests, such as union, intersection, difference, projection and selection. In this paper, we extend a recently introduced associative memory model to perform relational algebra operations. We introduce new algorithms and discuss their performance which provides an insight on how the brain performs some high-level information processing tasks.

Download the manuscript.

Bibtex
@inproceedings{AboGriTes20153,
  author = {Ala Aboudib and Vincent Gripon and
Baptiste Tessiau},
  title = {Implementing Relational-Algebraic Operators
for Improving Cognitive Abilities in Networks of
Neural Cliques},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {36--41},
  month = {March},
}

Neural Associative Memories as Accelerators for Binary Vector Search

C. Yu, V. Gripon, X. Jiang and H. Jégou, "Neural Associative Memories as Accelerators for Binary Vector Search," in Proceedings of Cognitive, pp. 85--89, March 2015.

Associative memories aim at matching an input noisy vector with a stored one. The matched vector satisfies a minimum distance criterion with respect to the inner metric of the device. This problem of finding nearest neighbors in terms of Euclidean or Hamming distances is a very common operation in machine learning and pattern recognition. However, the inner metrics of associative memories are often ill-suited to practical scenarios. In this paper, we adapt Willshaw networks in order to use them for accelerating nearest neighbor search with limited impact on accuracy. We provide a theoretical analysis of our method for binary sparse vectors. We also test our method using the MNIST handwritten digits database. Both our analysis on synthetic data and our experiments with real data show a significant gain in complexity with negligible loss in performance compared to exhaustive search.
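The idea of using the memory as a cheap pre-filter can be sketched as follows (a toy sketch under assumed parameters, not the authors' exact construction): a Willshaw-style matrix OR-accumulates outer products of the stored sparse vectors; any stored vector matching a partial query must have its support inside the set of units connected to every active query bit, so most of the database can be ruled out before the exhaustive scan.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 64, 4, 20          # dimension, active bits per vector, database size

# Database of sparse binary vectors, k active bits each.
db = np.zeros((m, n), dtype=bool)
for v in db:
    v[rng.choice(n, size=k, replace=False)] = True

# Willshaw-style memory: OR of the outer products of the stored vectors.
M = np.zeros((n, n), dtype=bool)
for v in db:
    M |= np.outer(v, v)

def candidates(query_bits):
    """Indices of stored vectors compatible with the partial query.

    A vector containing all query bits has every (query bit, own bit)
    pair connected in M, so its support must lie inside s. This is a
    cheap necessary condition: no false negatives, rare false positives.
    """
    s = M[query_bits].all(axis=0)    # units connected to every query bit
    return [i for i, v in enumerate(db) if not (v & ~s).any()]

# Partial query: two of the active bits of db[0].
query = np.flatnonzero(db[0])[:2]
short_list = candidates(query)
```

The exhaustive Hamming comparison then runs only over `short_list`, which is guaranteed to contain db[0].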

Download the manuscript.

Bibtex
@inproceedings{YuGriJiaJé20153,
  author = {Chendi Yu and Vincent Gripon and Xiaoran
Jiang and Hervé Jégou},
  title = {Neural Associative Memories as Accelerators
for Binary Vector Search},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {85--89},
  month = {March},
}

Using Tags to Improve Diversity of Sparse Associative Memories

S. Larroque, E. S. Gooya, V. Gripon and D. Pastor, "Using Tags to Improve Diversity of Sparse Associative Memories," in Proceedings of Cognitive, pp. 1--7, March 2015.

Associative memories, a classical model for brain long-term memory, face interferences between old and new memories. Usually, the only remedy is to enlarge the network so as to retain more memories without collisions: this is the network’s size–diversity trade-off. We propose a novel way of representing data in these networks that provides another means to extend diversity without resizing the network. We show from our analysis and simulations that this method is a viable alternative, which can perfectly fit cases where the network’s size is constrained, such as neuromorphic FPGA boards implementing associative memories.

Download the manuscript.

Bibtex
@inproceedings{LarGooGriPas20153,
  author = {Stephen Larroque and Ehsan Sedgh Gooya and
Vincent Gripon and Dominique Pastor},
  title = {Using Tags to Improve Diversity of Sparse
Associative Memories},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {1--7},
  month = {March},
}

Automatic face recognition using SIFT and networks of tagged neural cliques

E. S. Gooya, D. Pastor and V. Gripon, "Automatic face recognition using SIFT and networks of tagged neural cliques," in Proceedings of Cognitive, pp. 57--61, March 2015.

Storing information in fully interconnected subgraphs (cliques) has recently been improved in networks of neural cliques. In this paper, a face recognition system based on such networks is presented, in which local descriptors are used to perform feature extraction. Among the wide range of possible image descriptors for face recognition, we focus specifically on the Scale Invariant Feature Transform (SIFT). In contrast to standard methods, our proposed method requires no empirically chosen threshold. Moreover, it performs matching between sets of features, in addition to individual feature matching, thus favoring joint occurrences of descriptors during the recognition process. We compare our approach to state-of-the-art face recognition systems based on SIFT descriptors. The evaluation is carried out on the Olivetti and Oracle Research Laboratory (ORL) face database, whose diversity is significant for assessing face recognition methods.

Best paper award

Download the manuscript.

Bibtex
@inproceedings{GooPasGri20153,
  author = {Ehsan Sedgh Gooya and Dominique Pastor and
Vincent Gripon},
  title = {Automatic face recognition using SIFT and
networks of tagged neural cliques},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {57--61},
  month = {March},
}

Sparse Binary Matrices as Efficient Associative Memories

V. Gripon, V. Skachek and M. Rabbat, "Sparse Binary Matrices as Efficient Associative Memories," in Proceedings of the 52nd Allerton conference, pp. 499--504, October 2014.

Associative memories are widely used devices which can be viewed as universal error-correcting decoders. Employing error-correcting code principles in these devices has made it possible to greatly enhance their performance. In this paper we reintroduce a neural-based model using the formalism of linear algebra and extend its functionality, originally limited to erasure retrieval, to handle approximate inputs. In order to perform the retrieval, we use an iterative algorithm that provably converges. We then analyze the performance of the associative memory under the assumption of connection independence. We support our theoretical results with numerical simulations.

Download the manuscript.

Bibtex
@inproceedings{GriSkaRab201410,
  author = {Vincent Gripon and Vitaly Skachek and
Michael Rabbat},
  title = {Sparse Binary Matrices as Efficient
Associative Memories},
  booktitle = {Proceedings of the 52nd Allerton
conference},
  year = {2014},
  pages = {499--504},
  month = {October},
}

Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories

H. Jarollahi, N. Onizawa, V. Gripon, T. Hanyu and W. J. Gross, "Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories," in Proceedings of SiPS, pp. 1--6, October 2014.

In this paper, a context-driven search engine is presented based on a new family of associative memories. It stores only the associations between items from multiple search fields in the form of binary links, and merges repeated field items to reduce the memory requirements. It achieves a 13.6× reduction in memory bits and accesses, and an 8.6× reduction in the number of clock cycles per search operation, compared to a classical field-based search structure using content-addressable memory. Furthermore, using parallel computational nodes, the proposed search engine achieves a five-orders-of-magnitude reduction in the number of clock cycles compared to a CPU-based counterpart running a classical search algorithm in software.

Download the manuscript.

Bibtex
@inproceedings{JarOniGriHanGro201410,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Takahiro Hanyu and Warren J.
Gross},
  title = {Algorithm and Architecture for a
Multiple-Field Context-Driven Search Engine Using
Fully-Parallel Clustered Associative Memories},
  booktitle = {Proceedings of SiPS},
  year = {2014},
  pages = {1--6},
  month = {October},
}

Information, Noise, Coding, Modulation: What about the Brain?

C. Berrou, O. Dufor, V. Gripon and X. Jiang, "Information, Noise, Coding, Modulation: What about the Brain?," in Proceedings of the 8th symposium on Turbo Codes and Iterative Information Processing, pp. 167--172, August 2014.

At the microscopic level, the brain is fundamentally a matter of physics and chemistry, as are all the components of the universe. At the macroscopic scale, behavior, psychology and affects are the main dimensions of its life. To convert atoms and molecules into intelligence, some kind of information has to be fixed in the grey matter of the cerebral cortex. The way this "mental information" is materialized and processed is still an enigma, probably the most puzzling problem posed to science today. At this mesoscopic level of brain functioning, the concepts to consider are likely the same as those considered in communication and information theory: mainly information, noise, coding and modulation. This paper proposes some ideas that could help understand some features of the brain from an information-processing perspective.

Download the manuscript.

Bibtex
@inproceedings{BerDufGriJia20148,
  author = {Claude Berrou and Olivier Dufor and
Vincent Gripon and Xiaoran Jiang},
  title = {Information, Noise, Coding, Modulation:
What about the Brain?},
  booktitle = {Proceedings of the 8th symposium on
Turbo Codes and Iterative Information Processing},
  year = {2014},
  pages = {167--172},
  month = {August},
}

A GPU-based Associative Memory using Sparse Neural Networks

Z. Yao, V. Gripon and M. Rabbat, "A GPU-based Associative Memory using Sparse Neural Networks," in Proceedings of the PCNN-14 conference, pp. 688--692, July 2014.

Associative memories, serving as building blocks for a variety of algorithms, store content in such a way that it can be later retrieved by probing the memory with a small portion of it, rather than with an address as in more traditional memories. Recently, Gripon and Berrou have introduced a novel construction which builds on ideas from the theory of error correcting codes, greatly outperforming the celebrated Hopfield Neural Networks in terms of the number of stored messages per neuron and the number of stored bits per synapse. The work of Gripon and Berrou proposes two retrieval rules, SUM-OF-SUM and SUM-OF-MAX. In this paper, we implement both rules on a general purpose graphical processing unit (GPU). SUM-OF-SUM uses only matrix-vector multiplication and is easily implemented on the GPU, whereas SUM-OF-MAX, which involves non-linear operations, is much less straightforward to implement. However, SUM-OF-MAX gives significantly better retrieval error rates. We propose a hybrid scheme tailored for implementation on a GPU which achieves an 880-fold speedup without sacrificing any accuracy.
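The two rules can be sketched for a small clustered network (a dense-numpy toy with assumed sizes, not the GPU implementation): messages occupy one neuron per cluster and are stored as cliques; SUM-OF-SUM scores a neuron by the total number of active neurons it is connected to, while SUM-OF-MAX lets each cluster contribute at most once.

```python
import numpy as np

rng = np.random.default_rng(2)
c, l = 4, 8                           # clusters, neurons per cluster
n = c * l
cluster = np.repeat(np.arange(c), l)  # cluster index of each neuron

# Store a few messages (one neuron per cluster) as cliques in W.
W = np.zeros((n, n), dtype=bool)
messages = [rng.integers(l, size=c) + np.arange(c) * l for _ in range(5)]
for msg in messages:
    for i in msg:
        for j in msg:
            if cluster[i] != cluster[j]:
                W[i, j] = True

def sum_of_sum(active):
    # Every active neuron contributes to each neuron it is connected to.
    return W[active].sum(axis=0)

def sum_of_max(active):
    # Each cluster of active neurons contributes at most 1 per target.
    s = np.zeros(n, dtype=int)
    for k in range(c):
        rows = [i for i in active if cluster[i] == k]
        if rows:
            s += W[rows].max(axis=0).astype(int)
    return s

# Retrieval demo: erase the last cluster of the first stored message.
active = list(messages[0][:-1])
erased_truth = messages[0][-1]
scores = sum_of_max(active)
```

With a single active neuron per known cluster the two rules coincide; they differ when several neurons per cluster remain candidates, where SUM-OF-MAX is more robust. The erased neuron always reaches the maximal score c - 1.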

Download the manuscript.

Bibtex
@inproceedings{YaoGriRab20147,
  author = {Zhe Yao and Vincent Gripon and Michael
Rabbat},
  title = {A GPU-based Associative Memory using Sparse
Neural Networks},
  booktitle = {Proceedings of the PCNN-14 conference},
  year = {2014},
  pages = {688--692},
  month = {July},
}

Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques," in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, volume 1, pp. 262--268, July 2014.

A new family of sparse neural networks achieving nearly optimal performance has recently been introduced. In these networks, messages are stored as cliques in clustered graphs. In this paper, we interpret these networks using the formalism of error-correcting codes. To achieve this, we introduce two original codes, the thrifty code and the clique code, both sub-families of binary constant-weight codes. We also provide the networks with an enhanced retrieval rule that enables a property of answer correctness and improves performance.

Download the manuscript.

Bibtex
@inproceedings{BogGriSegHei20147,
  author = {Bartosz Boguslawski and Vincent Gripon and
Fabrice Seguin and Frédéric Heitzmann},
  title = {Huffman Coding for Storing Non-uniformly
Distributed Messages in Networks of Neural Cliques},
  booktitle = {Proceedings of the Twenty-Eighth AAAI
Conference on Artificial Intelligence, volume 1},
  year = {2014},
  pages = {262--268},
  month = {July},
}

A study of retrieval algorithms of sparse messages in networks of neural cliques

A. Aboudib, V. Gripon and X. Jiang, "A study of retrieval algorithms of sparse messages in networks of neural cliques," in Proceedings of Cognitive 2014, pp. 140--146, May 2014.

Associative memories are data structures addressed using part of the content rather than an index. They offer good fault reliability and biological plausibility. Among different families of associative memories, sparse ones are known to offer the best efficiency (ratio of the amount of bits stored to that of bits used by the network itself). Their retrieval process performance has been shown to benefit from the use of iterations. In this paper, we introduce several algorithms to enhance the performance of the retrieval process in recently proposed sparse associative memories based on binary neural networks. We show that these algorithms provide better performance than existing techniques and discuss their biological plausibility. We also analyze the required number of iterations and derive corresponding curves.

Download the manuscript.

Bibtex
@inproceedings{AboGriJia20145,
  author = {Ala Aboudib and Vincent Gripon and Xiaoran
Jiang},
  title = {A study of retrieval algorithms of sparse
messages in networks of neural cliques},
  booktitle = {Proceedings of Cognitive 2014},
  year = {2014},
  pages = {140--146},
  month = {May},
}

Towards a Spectral Characterization of Signals Supported on Small-World Networks

M. Rabbat and V. Gripon, "Towards a Spectral Characterization of Signals Supported on Small-World Networks," in ICASSP, pp. 4793--4797, May 2014.

We study properties of the family of small-world random graphs introduced in Watts & Strogatz (1998), focusing on the spectrum of the normalized graph Laplacian. This spectrum influences the extent to which a signal supported on the vertices of the graph can be simultaneously localized on the graph and in the spectral domain (the surrogate of the frequency domain for signals supported on a graph). This characterization has implications for inferring or interpolating functions supported on such graphs when observations are only available at a subset of nodes.
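The object under study can be reproduced in a few lines. This sketch is an assumed toy instance: for brevity it uses the Newman-Watts-style variant of the small-world construction (shortcut edges added with probability p, none removed, so no node can become isolated), then computes the normalized Laplacian and its spectrum, which always lies in [0, 2].

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, p = 30, 4, 0.1   # nodes, ring-lattice degree (even), shortcut prob.

# Ring lattice: each node linked to its k/2 nearest neighbours each side.
A = np.zeros((n, n))
for i in range(n):
    for d in range(1, k // 2 + 1):
        A[i, (i + d) % n] = A[(i + d) % n, i] = 1.0

# Small-world shortcuts (Newman-Watts-style, assumed for simplicity):
# for each lattice edge, add a random extra edge with probability p.
for i in range(n):
    for d in range(1, k // 2 + 1):
        if rng.random() < p:
            t = int(rng.integers(n))
            if t != i:
                A[i, t] = A[t, i] = 1.0

# Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)
L = np.eye(n) - A / np.sqrt(np.outer(deg, deg))
eig = np.linalg.eigvalsh(L)
```

The eigenvalues play the role of frequencies; their spread governs how localized a signal can simultaneously be on the graph and in the spectral domain.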

Download the manuscript.

Bibtex
@inproceedings{RabGri20145,
  author = {Michael Rabbat and Vincent Gripon},
  title = {Towards a Spectral Characterization of
Signals Supported on Small-World Networks},
  booktitle = {ICASSP},
  year = {2014},
  pages = {4793--4797},
  month = {May},
}

Cluster-based Associative Memories Built From Unreliable Storage

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. Gross, "Cluster-based Associative Memories Built From Unreliable Storage," in ICASSP, pp. 8370--8374, May 2014.

We consider associative memories based on clustered graphs that were recently introduced. These memories are almost optimal in terms of the amount of storage they require (efficiency), and allow retrieving messages with low complexity. We study an unreliable implementation of the memory and compare its error rate and storage efficiency with that of a reliable implementation. We present analytical and simulation results that indicate that the proposed memory structure can tolerate a large number of faults at a reasonable cost, thereby making it a good candidate for achieving highly efficient circuit implementations of associative memories.

Download the manuscript.

Bibtex
@inproceedings{LedGriRabGro20145,
  author = {François Leduc-Primeau and Vincent Gripon
and Michael Rabbat and Warren Gross},
  title = {Cluster-based Associative Memories Built
From Unreliable Storage},
  booktitle = {ICASSP},
  year = {2014},
  pages = {8370--8374},
  month = {May},
}

Sparse Structured Associative Memories as Efficient Set-Membership Data Structures

V. Gripon, V. Skachek and M. G. Rabbat, "Sparse Structured Associative Memories as Efficient Set-Membership Data Structures," in Proceedings of the 51st Allerton conference, pp. 500--505, October 2013.

We study the use of sparse structured associative memories as a memory-efficient and computationally-efficient data structure for representing a set of elements when one wishes to perform set-membership queries and some errors (false positives) are tolerable. Associative memories, when viewed as representing a set, enjoy a number of interesting properties, including that set membership queries can be carried out even when the input (query element) is only partially known or is partially corrupted. The associative memories considered here (initially proposed in [Gripon and Berrou, 2011]) encode the set in the edge structure of a graph. In this paper we generalize this construction to encode the set in the edge structure of a hypergraph. We derive bounds on the false positive rates (the probability that the associative memory erroneously declares that an element is in the stored set when it, in fact, was not). Interestingly, the proposed structures enjoy many of the same properties as Bloom filters (e.g., they have zero false negative rate, the time to perform an insert and lookup does not depend on the number of elements stored, and the false positive rate can be reduced by using additional memory for storage), while also offering the properties of associative memories (allowing for queries on partial or corrupted inputs).
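The set-membership use can be sketched in a few lines (the toy parameters and the element-to-symbols mapping are assumptions, and the paper's hypergraph generalization is not shown): an element maps to one vertex per cluster, insertion stores the clique over those vertices, and a lookup only checks that all clique edges are present, giving zero false negatives and Bloom-filter-like false positives.

```python
from itertools import combinations

c, l = 4, 8      # clusters and vertices per cluster (assumed sizes)
edges = set()    # the memory: stored edges of the clustered graph

def clique(element):
    # One vertex (cluster index, symbol) per cluster for this element.
    verts = [(k, element[k] % l) for k in range(c)]
    return combinations(verts, 2)

def insert(element):
    edges.update(clique(element))

def member(element):
    # All clique edges stored => "maybe in the set" (false positives
    # possible when cliques overlap); any missing edge => definitely not.
    return all(e in edges for e in clique(element))

insert((1, 2, 3, 4))
insert((5, 6, 7, 0))
```

A query costs a constant number of edge lookups regardless of how many elements are stored, and a partially known element can still be tested by checking only the edges between its known symbols.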

Download the manuscript.

Bibtex
@inproceedings{GriSkaRab201310,
  author = {Vincent Gripon and Vitaly Skachek and
Michael G. Rabbat},
  title = {Sparse Structured Associative Memories as
Efficient Set-Membership Data Structures},
  booktitle = {Proceedings of the 51st Allerton
conference},
  year = {2013},
  pages = {500--505},
  month = {October},
}

Mémoires associatives pour observations floues

V. Gripon and X. Jiang, "Mémoires associatives pour observations floues," in Proceedings of the XXIVth GRETSI colloquium, September 2013.

We present an extension of recently introduced associative memories to the case of approximate inputs. We analyze their performance and show that, for a fixed error rate, the memory consumption is within a constant factor of optimal while the algorithmic complexity remains limited (quadratic). These extensions open the way to using associative memories to perform complex searches in databases with a low response time.

Download the manuscript.

Bibtex
@inproceedings{GriJia20139,
  author = {Vincent Gripon and Xiaoran Jiang},
  title = {Mémoires associatives pour observations
floues},
  booktitle = {Proceedings of XXIV-th Gretsi seminar},
  year = {2013},
  month = {September},
}

Maximum Likelihood Associative Memories

V. Gripon and M. Rabbat, "Maximum Likelihood Associative Memories," in Proceedings of the Information Theory Workshop, pp. 1--5, September 2013.

Associative memories are structures that store data in such a way that it can later be retrieved from a fraction of its content -- a form of robustness to errors/erasures. They are used in applications ranging from CPU caches and memory management to database engines. In this paper, we study associative memories built according to the maximum likelihood principle. We derive the minimum residual error rate when the data is drawn from a uniform binary source. We then determine the minimum amount of memory required to store such data. Finally, we give a bound on the algorithmic complexity of message retrieval. We compare these results with two existing associative memories: the celebrated Hopfield model and an architecture recently introduced by Gripon and Berrou.

Download the manuscript.

Bibtex
@inproceedings{GriRab20139,
  author = {Vincent Gripon and Michael Rabbat},
  title = {Maximum Likelihood Associative Memories},
  booktitle = {Proceedings of Information Theory
Workshop},
  year = {2013},
  pages = {1--5},
  month = {September},
}

Reconstructing a Graph from Path Traces

V. Gripon and M. Rabbat, "Reconstructing a Graph from Path Traces," in Proceedings of the International Symposium on Information Theory, pp. 2488--2492, July 2013.

This paper considers the problem of inferring the structure of a network from indirect observations. Each observation (a "trace") is an unordered set of nodes that activate along a path through the network. Since a trace contains no information about the order of nodes in the path, there are many possible orderings for each trace, and the problem of inferring a network from traces is, in general, ill-posed. We propose and analyze an algorithm that inserts edges by ordering each trace into a path according to which pairs of nodes in the path appear most frequently across the set of observations. When all traces involve exactly three nodes, we analyze necessary and sufficient conditions for the reconstruction algorithm to exactly recover the initial graph. Finally, for a family of random graphs, we derive the reconstruction error probabilities (falsely added and missed edges).
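For three-node traces, the pair-frequency heuristic can be sketched as follows (a rough illustration with assumed data and assumed tie-breaking; the paper's exact-recovery analysis is not reproduced): count how often each node pair co-occurs in a trace, then orient each trace as the path whose two edges have the largest combined counts.

```python
from collections import Counter
from itertools import combinations, permutations

# Assumed observations: unordered 3-node traces over a small network.
traces = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 1, 3), (1, 3, 4), (1, 2, 4)]

# Co-occurrence counts of every node pair across all traces.
counts = Counter()
for t in traces:
    counts.update(combinations(sorted(t), 2))

def pair(a, b):
    return (a, b) if a < b else (b, a)

# Orient each trace as the path a-b-c maximizing the support of its
# two edges, then insert those edges into the reconstructed graph.
edges = set()
for t in traces:
    a, b, c = max(permutations(t),
                  key=lambda p: counts[pair(p[0], p[1])]
                              + counts[pair(p[1], p[2])])
    edges.update({pair(a, b), pair(b, c)})
```

Nodes that sit in the middle of true paths co-occur with their neighbors in many traces, which is what makes the most-frequent-pair ordering informative.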

Download the manuscript.

Bibtex
@inproceedings{GriRab20137,
  author = {Vincent Gripon and Michael Rabbat},
  title = {Reconstructing a Graph from Path Traces},
  booktitle = {Proceedings of International Symposium
on Information Theory},
  year = {2013},
  pages = {2488--2492},
  month = {July},
}

A Low-Power Content-Addressable Memory Based on Clustered-Sparse-Networks

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "A Low-Power Content-Addressable Memory Based on Clustered-Sparse-Networks," in Proceedings of the 24th International Conference on Application-specific Systems, Architectures and Processors, pp. 642--653, June 2013.

A Content-Addressable Memory (CAM) using a novel mechanism for associating input data with the matching addresses is introduced. The proposed architecture builds on recently introduced clustered sparse binary neural networks that eliminate, on average, most of the comparisons required during a search. As a result, the energy consumption of the proposed model is much lower than that of a conventional CAM. Given an input, the architecture identifies a handful of candidate rows in an associated CAM and performs comparisons only on those. In a 0.13 um CMOS technology, the energy consumption and search delay are respectively 9.5% and 30.4% of those of a conventional NAND architecture, for a transistor-count overhead of 3.4%.

Download the manuscript.

Bibtex
@inproceedings{JarGriOniGro20136,
  author = {Hooman Jarollahi and Vincent Gripon and
Naoya Onizawa and Warren J. Gross},
  title = {A Low-Power Content-Addressable Memory Based
on Clustered-Sparse-Networks},
  booktitle = {Proceedings of 24th International
Conference on Application-specific Systems,
Architectures and Processors},
  year = {2013},
  pages = {642--653},
  month = {June},
}

Reduced-complexity binary-weight-coded associative memories

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Reduced-complexity binary-weight-coded associative memories," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, pp. 2523--2527, May 2013.

Associative memories retrieve previously stored information from partial or erroneous inputs. Recently, a new family of associative memories based on clustered neural networks was introduced; it can store many more messages than classical Hopfield networks. In this paper, we propose a hardware architecture of these memories for partial or erroneous inputs. Our architecture eliminates the winner-take-all modules and thus reduces hardware complexity, consuming 65% fewer FPGA lookup tables and increasing the operating frequency by a factor of about 1.9 compared to previous work.

Download the manuscript.

Bibtex
@inproceedings{JarOniGriGro20135,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Warren J. Gross},
  title = {Reduced-complexity binary-weight-coded
associative memories},
  booktitle = {Proceedings of International Conference
on Acoustics, Speech, and Signal Processing},
  year = {2013},
  pages = {2523--2527},
  month = {May},
}

Compressing multisets using tries

V. Gripon, M. Rabbat, V. Skachek and W. J. Gross, "Compressing multisets using tries," in Proceedings of the Information Theory Workshop, Lausanne, Switzerland, pp. 647--651, September 2012.

We consider the problem of efficient lossless representation of multisets of m items drawn with repetition from a set of size 2^n. One would expect that encoding such a multiset should yield a large gain over encoding the corresponding sequence of draws, since the information about the order of draws in the sequence amounts to a permutation. We propose and analyze an encoder/decoder based on prefix trees (tries). The encoding phase has complexity O(m(n + log m)) while the decoding phase has complexity O(mn). We focus in particular on the case where the number of items m is 2^n/c, for c > 1 and n → ∞. Under these conditions, and when the items of the multiset are drawn uniformly at random, we show that the proposed encoding achieves an unbounded gain. Moreover, the expected length of the encoded multiset is asymptotically within a factor 5/3 of the lower bound.
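A minimal sketch of the trie principle (not the paper's exact coder, whose codeword lengths are what the analysis bounds): a multiset of n-bit integers is split recursively on its leading bit, and each trie node records how many items take the 0-branch. The order of the items is never stored, which is the source of the gain over encoding the raw sequence of draws.

```python
# Hedged sketch of trie-based multiset encoding (illustrative only).

def encode(items, n):
    """Return the 0-branch counts of the trie in depth-first order."""
    if n == 0 or not items:
        return []
    zeros = [x for x in items if not (x >> (n - 1)) & 1]
    ones = [x for x in items if (x >> (n - 1)) & 1]
    return [len(zeros)] + encode(zeros, n - 1) + encode(ones, n - 1)

def decode(counts, n, m):
    """Rebuild the multiset (in sorted order) from the counts and its size m."""
    it = iter(counts)
    def rec(n, m, prefix):
        if m == 0:
            return []
        if n == 0:
            return [prefix] * m
        z = next(it)
        return rec(n - 1, z, prefix << 1) + rec(n - 1, m - z, (prefix << 1) | 1)
    return rec(n, m, 0)

msg = [1, 1, 5]                   # three 3-bit items, drawn with repetition
code = encode(msg, 3)
print(code)                       # [2, 2, 0, 1, 0]
print(decode(code, 3, len(msg)))  # [1, 1, 5]
```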

Download the manuscript.
Download the slides.

Bibtex
@inproceedings{GriRabSkaGro20129,
  author = {Vincent Gripon and Michael Rabbat and
Vitaly Skachek and Warren J. Gross},
  title = {Compressing multisets using tries},
  booktitle = {Proceedings of Information Theory
Workshop},
  year = {2012},
  address = {Lausanne, Switzerland},
  month = {September},
  pages = {647--651},
}

Random clique codes

V. Gripon, V. Skachek, W. J. Gross and M. Rabbat, "Random clique codes," in Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, pp. 121--125, August 2012.

A new family of associative memories based on neural networks has recently been introduced. These memories achieve excellent performance thanks to principles borrowed from error-correcting codes. Building on these memories, we introduce a new family of codes called "clique codes". These codes rely on cliques in balanced c-partite graphs corresponding to the associative memories. In particular, we study an ensemble of random clique codes and prove that these ensembles asymptotically contain good codes. Moreover, these codes can be decoded efficiently using the associative memories (low complexity and memory footprint). They offer an interesting alternative to existing codes, in particular over the memoryless erasure channel.

Download the manuscript.
Download the slides.

Bibtex
@inproceedings{GriSkaGroRab20128,
  author = {Vincent Gripon and Vitaly Skachek and
Warren J. Gross and Michael Rabbat},
  title = {Random clique codes},
  booktitle = {Proceedings of 7th International
Symposium on Turbo Codes and Iterative Information
Processing},
  year = {2012},
  address = {Gothenburg, Sweden},
  month = {August},
  pages = {121--125},
}

Learning long sequences in binary neural networks

X. Jiang, V. Gripon and C. Berrou, "Learning long sequences in binary neural networks," in Proceedings of Cognitive 2012, Nice, France, pp. 165--170, July 2012.

This paper proposes an original architecture of sparse, directed neural networks that introduces sequentiality into associative memories. The architecture can be seen as a generalization of a recently introduced family of neural networks whose operation relies on the presence of cliques in the corresponding graphs. This new architecture is able to learn and then retrieve very long sequences from the knowledge of a consecutive part of their symbols.

Download the manuscript.

Bibtex
@inproceedings{JiaGriBer20127,
  author = {Xiaoran Jiang and Vincent Gripon and
Claude Berrou},
  title = {Learning long sequences in binary neural
networks},
  booktitle = {Proceedings of Cognitive 2012},
  year = {2012},
  address = {Nice, France},
  pages = {165--170},
  month = {July},
}

Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks," in Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 2901--2904, May 2012.

Associative memories are an alternative to index-based memories whose hardware implementation benefits certain applications, such as data mining. Classical neural-network approaches are inadequate, since the number of stored bits per used bit (efficiency) tends to zero as the memory size tends to infinity. Moreover, the size of the learned messages equals the number of nodes in the network, severely limiting the number of messages (diversity) such a memory can learn. Recently, a new algorithm based on sparse neural networks achieving near-optimal efficiency and large diversity was introduced. In this paper, a proof-of-concept hardware implementation of these networks is presented. Its limitations and possible extensions are also analyzed.

Download the manuscript.
Download the slides.

Bibtex
@inproceedings{JarOniGriGro20125,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Warren J. Gross},
  title = {Architecture and Implementation of an
Associative Memory Using Sparse Clustered Networks},
  booktitle = {Proceedings of IEEE International
Symposium on Circuits and Systems},
  month = {May},
  year = {2012},
  pages = {2901--2904},
}

Nearly-optimal associative memories based on distributed constant weight codes

V. Gripon and C. Berrou, "Nearly-optimal associative memories based on distributed constant weight codes," in Proceedings of the Information Theory and Applications Workshop, San Diego, CA, USA, pp. 269--273, February 2012.

A new family of neural networks achieving near-optimal performance has recently been introduced. In these networks, messages are stored as cliques in multipartite graphs. In this paper, we interpret these networks within the formalism of error-correcting codes. To do so, we introduce two original codes, the thrifty code and the clique code, both of which are subfamilies of constant-weight codes. We also equip the network with an improved decoding rule that gives it exact-answer properties and noticeably increases performance.

Thanks to Pascal Vontobel for pointing out several errors (this version is up to date).

Download the manuscript.
Download the slides.

Bibtex
@inproceedings{GriBer20122,
  author = {Vincent Gripon and Claude Berrou},
  title = {Nearly-optimal associative memories based
on distributed constant weight codes},
  booktitle = {Proceedings of Information Theory and
Applications Workshop},
  year = {2012},
  address = {San Diego, CA, USA},
  pages = {269--273},
  month = {February},
}

A simple and efficient way to store many messages using neural cliques

V. Gripon and C. Berrou, "A simple and efficient way to store many messages using neural cliques," in Proceedings of the IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain, Paris, France, pp. 54--58, April 2011.

Associative memories are devices able to learn and then recall messages despite erasures or errors. Their mechanics are similar to those of error-correcting decoders. On the other hand, correlation plays a dual role in these two devices: it is the cornerstone of recall in associative memories, while it is avoided in decoders. This paper introduces original codes that bring the two dynamics closer together. The basic idea is to associate with each message to be learned a clique in a binary neural network. The resulting performance far exceeds the state of the art, in particular that of Hopfield networks. Moreover, the proposed model is far more biologically plausible: it uses sparse binary connections between populations of neurons, which perform only two operations: summation and maximum selection.
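The two operations just mentioned, summation and maximum selection, can be sketched as follows. Sizes and function names are illustrative only: a message of c symbols activates one unit per cluster, learning stores the binary connections of the resulting clique, and recall scores each candidate unit by summing its connections to the known units, keeping the per-cluster maximum (winner-take-all).

```python
# Toy sketch of the neural clique principle (illustrative only).

c, l = 4, 8                                        # clusters, units per cluster
W = [[set() for _ in range(l)] for _ in range(c)]  # W[i][a] = connected (j, b)

def learn(msg):
    """Store the clique of one message (one symbol per cluster)."""
    for i, a in enumerate(msg):
        for j, b in enumerate(msg):
            if i != j:
                W[i][a].add((j, b))

def recall(partial):
    """partial: list of symbols, None for unknown; returns completed message."""
    known = [(j, b) for j, b in enumerate(partial) if b is not None]
    out = []
    for i in range(c):
        if partial[i] is not None:
            out.append(partial[i])
        else:  # summation, then maximum selection
            scores = [sum((j, b) in W[i][a] for j, b in known) for a in range(l)]
            out.append(max(range(l), key=scores.__getitem__))
    return out

learn([1, 3, 5, 7])
learn([2, 2, 6, 0])
print(recall([1, 3, None, None]))  # [1, 3, 5, 7]
```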

Download the manuscript.
Download the slides.

Bibtex
@inproceedings{GriBer20114,
  author = {Vincent Gripon and Claude Berrou},
  title = {A simple and efficient way to store many
messages using neural cliques},
  booktitle = {Proceedings of IEEE Symposium on
Computational Intelligence, Cognitive Algorithms,
Mind, and Brain},
  year = {2011},
  pages = {54--58},
  address = {Paris, France},
  month = {April},
}

Coded Hopfield networks

C. Berrou and V. Gripon, "Coded Hopfield networks," in Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, pp. 1--5, September 2010.

Error-correcting coding is introduced into Hopfield-type associative memories in order to increase learning diversity as well as robustness to errors or erasures. To achieve this, the graph associated with the classical Hopfield network is transformed into a bipartite graph in which the incoming information is projected onto orthogonal or quasi-orthogonal codes. Although learning is similar to that of classical Hopfield networks (Hebb's rule), the recall process relies on error-correcting decoding, which offers strong discrimination between the learned messages.

Download the manuscript.

Bibtex
@inproceedings{BerGri20109,
  author = {Claude Berrou and Vincent Gripon},
  title = {Coded Hopfield networks},
  booktitle = {Proceedings of 6th International
Symposium on Turbo Codes and Iterative Information
Processing},
  year = {2010},
  pages = {1--5},
  address = {Brest, France},
  month = {September},
}

Qualitative Concurrent Stochastic Games with Imperfect Information

V. Gripon and O. Serre, "Qualitative Concurrent Stochastic Games with Imperfect Information," in Proceedings of the 36th International Colloquium on Automata, Languages and Programming, Springer, Lecture Notes in Computer Science, Rhodes, Greece, pp. 200--211, July 2009.

We study a model of games combining concurrency, imperfect information, and stochastic aspects. These are finite-state games in which, at each round, the two players simultaneously and independently choose an action. A successor state is then chosen according to a probability distribution depending on the previous state and on the pair of actions chosen by the players. Imperfect information is modeled as follows: each player has an equivalence relation over states and can only observe the equivalence class of the current state; hence a player must behave identically in two plays that are indistinguishable to him. We consider reachability objectives (does the play visit a final state?) and Büchi objectives (does the play visit a final state infinitely often?). Our main contribution is to show that deciding whether the first player has a strategy ensuring an almost-sure win against any strategy of the second player is 2-ExpTime-complete. We also characterize the strategies that allow him to win almost surely.

Download the manuscript.

Bibtex
@inproceedings{GriSer20097,
  author = {Vincent Gripon and Olivier Serre},
  title = {Qualitative Concurrent Stochastic Games
with Imperfect Information},
  booktitle = {Proceedings of 36th International
Colloquium on Automata, Languages and Programming},
  year = {2009},
  editor = {Springer},
  pages = {200--211},
  series = {Lecture Notes in Computer Science},
  address = {Rhodes, Greece},
  month = {July},
}



