Vincent Gripon's Homepage

Research and Teaching Blog

Journal Papers

2017

A. Iscen, T. Furon, V. Gripon, M. Rabbat and H. Jégou, "Memory vectors for similarity search in high-dimensional spaces," in IEEE Transactions on Big Data, pp. 1--13, 2017.

2016

B. Pasdeloup, V. Gripon, G. Mercier, D. Pastor and M. Rabbat, "Characterization and inference of weighted graph topologies from observations of diffused signals," in IEEE Transactions on Signal and Information Processing over Networks, 2016. Submitted.

V. Gripon, J. Heusel, M. Löwe and F. Vermet, "A Comparative Study of Sparse Associative Memories," in Journal of Statistical Physics, Volume 164, pp. 105--129, 2016. Manuscript.

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 2, pp. 375--387, 2016. Manuscript.

X. Jiang, V. Gripon, C. Berrou and M. Rabbat, "Storing sequences in binary tournament-based neural networks," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 5, pp. 913--925, 2016. Manuscript.

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks," in Transactions on Very Large Scale Integration Systems, Volume 27, Number 2, pp. 375--387, 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Neural Network Model for Solving the Feature Correspondence Problem," in Lecture Notes in Computer Science, Volume 9887, pp. 439--446, September 2016. Manuscript.

G. Soulié, V. Gripon and M. Robert, "Compression of Deep Neural Networks on the Fly," in Lecture Notes in Computer Science, Volume 9887, pp. 153--170, September 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention," in Cognitive Computation, pp. 1--20, September 2016. Manuscript.

2015

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. J. Gross, "Fault-Tolerant Associative Memories Based on c-Partite Graphs," in IEEE Transactions on Signal Processing, Volume 64, Number 4, pp. 829--841, 2015. Manuscript.

2014

H. Jarollahi, N. Onizawa, V. Gripon, N. Sakimura, T. Sugibayashi, T. Endoh, H. Ohno, T. Hanyu and W. J. Gross, "A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture," in Journal on Emerging and Selected Topics in Circuits and Systems, Volume 4, pp. 460--474, 2014. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks," in Journal of Signal Processing Systems, pp. 1--13, 2014. Manuscript.

B. K. Aliabadi, C. Berrou, V. Gripon and X. Jiang, "Storing sparse messages in networks of neural cliques," in IEEE Transactions on Neural Networks and Learning Systems, Volume 25, pp. 980--989, 2014. Manuscript.

2011

V. Gripon and C. Berrou, "Sparse neural networks with large learning diversity," in IEEE Transactions on Neural Networks, Volume 22, Number 7, pp. 1087--1096, July 2011. Manuscript.

Memory vectors for similarity search in high-dimensional spaces

A. Iscen, T. Furon, V. Gripon, M. Rabbat and H. Jégou, "Memory vectors for similarity search in high-dimensional spaces," in IEEE Transactions on Big Data, pp. 1--13, 2017.

We study an indexing architecture to store and search in a database of high-dimensional vectors from the perspective of statistical signal processing and decision theory. This architecture is composed of several memory units, each of which summarizes a fraction of the database by a single representative vector. The potential similarity of the query to one of the vectors stored in the memory unit is gauged by a simple correlation with the memory unit’s representative vector. This representative optimizes the test of the following hypothesis: the query is independent of any vector in the memory unit vs. the query is a simple perturbation of one of the stored vectors. Compared to exhaustive search, our approach finds the most similar database vectors significantly faster without a noticeable reduction in search quality. Interestingly, the reduction of complexity is provably better in high-dimensional spaces. We empirically demonstrate its practical interest in a large-scale image search scenario with off-the-shelf state-of-the-art descriptors.
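
To make the idea concrete, here is a minimal numpy sketch of the sum-based flavour of this architecture: each memory unit is summarized by the sum of its (L2-normalized) vectors, units are ranked by a single correlation with the query, and exhaustive comparison is restricted to the best-ranked units. The grouping, noise level and plain sum construction are illustrative simplifications, not the paper's optimized representative.

import numpy as np

def build_memory_vectors(X, group_size):
    # Split the database X (rows are L2-normalized descriptors) into memory
    # units and summarize each unit by the sum of its vectors.
    groups = [X[i:i + group_size] for i in range(0, len(X), group_size)]
    representatives = np.stack([g.sum(axis=0) for g in groups])
    return groups, representatives

def search(query, groups, representatives, n_candidates=5):
    # One correlation per memory unit, then exhaustive search restricted to
    # the few highest-scoring units.
    scores = representatives @ query
    best_units = np.argsort(-scores)[:n_candidates]
    candidates = np.vstack([groups[u] for u in best_units])
    return candidates[np.argmax(candidates @ query)]

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 128))
X /= np.linalg.norm(X, axis=1, keepdims=True)
query = X[42] + 0.05 * rng.standard_normal(128)        # noisy version of a stored vector
groups, reps = build_memory_vectors(X, group_size=10)
print(np.allclose(search(query, groups, reps), X[42]))  # True in the vast majority of runs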


Bibtex
@article{IscFurGriRabJé2017,
  author = {Ahmet Iscen and Teddy Furon and Vincent
Gripon and Michael Rabbat and Hervé Jégou},
  title = {Memory vectors for similarity search in
high-dimensional spaces},
  journal = {IEEE Transactions on Big Data},
  year = {2017},
  pages = {1--13},
}

Characterization and inference of weighted graph topologies from observations of diffused signals

B. Pasdeloup, V. Gripon, G. Mercier, D. Pastor and M. Rabbat, "Characterization and inference of weighted graph topologies from observations of diffused signals," in IEEE Transactions on Signal and Information Processing over Networks, 2016. Submitted.


Bibtex
@article{PasGriMerPasRab2016,
  author = {Bastien Pasdeloup and Vincent Gripon and
Grégoire Mercier and Dominique Pastor and Michael
Rabbat},
  title = {Characterization and inference of weighted
graph topologies from observations of diffused
signals},
  journal = {IEEE Transactions on Signal and
Information Processing over Networks},
  year = {2016},
  note = {Submitted},
}

A Comparative Study of Sparse Associative Memories

V. Gripon, J. Heusel, M. Löwe and F. Vermet, "A Comparative Study of Sparse Associative Memories," in Journal of Statistical Physics, Volume 164, pp. 105--129, 2016.

We study various models of associative memories with sparse information, i.e., a pattern to be stored is a random string of 0s and 1s with only about log N 1s. We compare different synaptic weights, architectures and retrieval mechanisms to shed light on the influence of the various parameters on the storage capacity.
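
For a sense of what such models look like, below is a minimal numpy sketch of a Willshaw-style memory with binary synaptic weights and sparse patterns of roughly log N active units, one standard point of comparison for this kind of study; the parameters and retrieval rule are illustrative rather than the paper's exact models.

import numpy as np

rng = np.random.default_rng(1)
N, n_active, n_patterns = 256, 8, 40      # about log2(256) = 8 active units per pattern

# sparse binary patterns: exactly n_active ones out of N
patterns = np.zeros((n_patterns, N), dtype=bool)
for p in patterns:
    p[rng.choice(N, n_active, replace=False)] = True

# Willshaw-style storage: binary weights, OR of the pattern outer products
W = np.zeros((N, N), dtype=bool)
for p in patterns:
    W |= p[:, None] & p[None, :]

def retrieve(cue, k=n_active):
    # Score each unit by how many active cue units it is connected to,
    # then keep the k best-scoring units.
    scores = (W & cue).sum(axis=1)
    out = np.zeros(N, dtype=bool)
    out[np.argsort(-scores)[:k]] = True
    return out

# erase half of a stored pattern and try to complete it
target = patterns[0]
cue = target.copy()
cue[np.flatnonzero(target)[: n_active // 2]] = False
print("recovered:", np.array_equal(retrieve(cue), target))   # should print True here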

Download manuscript.

Bibtex
@article{GriHeuLöVer2016,
  author = {Vincent Gripon and Judith Heusel and
Matthias Löwe and Franck Vermet},
  title = {A Comparative Study of Sparse Associative
Memories},
  journal = {Journal of Statistical Physics},
  year = {2016},
  volume = {164},
  pages = {105--129},
}

Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 2, pp. 375--387, 2016.

Associative memories are data structures that allow retrieval of previously stored messages given part of their content. They thus behave similarly to the human brain's memory, which is capable, for instance, of retrieving the end of a song given its beginning. Among the different families of associative memories, sparse ones are known to provide the best efficiency (ratio of the number of bits stored to the number of bits used). Recently, a new family of sparse associative memories achieving almost-optimal efficiency has been proposed. Their structure induces a direct mapping between input messages and stored patterns. Nevertheless, it is well known that non-uniformity of the stored messages can lead to a dramatic decrease in performance. In this work, we show the impact of non-uniformity on the performance of this recent model and we exploit the structure of the model to improve its performance in practical applications where data is not necessarily uniform. In order to approach the performance of networks with uniformly distributed messages reported in theoretical studies, twin neurons are introduced. To assess the adapted model, twin neurons are used with real-world data to optimize power consumption of electronic circuits in practical test cases.
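
The balancing idea behind twin neurons can be pictured with a toy allocation scheme: give frequent symbol values several units in their cluster and spread successive occurrences over them, so that no single unit is reused much more often than the others. The function below is only a hypothetical illustration of that principle, not the algorithm of the paper.

from collections import Counter
from itertools import cycle

def allocate_twins(symbols, units_per_cluster):
    # Hypothetical illustration: proportional allocation of units to symbol
    # values, then round-robin assignment of each occurrence to one of its twins.
    freq = Counter(symbols)
    total = sum(freq.values())
    alloc = {v: max(1, round(units_per_cluster * n / total)) for v, n in freq.items()}
    unit, mapping = 0, {}
    for v, k in alloc.items():
        mapping[v] = list(range(unit, unit + k))
        unit += k
    robins = {v: cycle(units) for v, units in mapping.items()}
    return [next(robins[s]) for s in symbols]

data = ["a"] * 6 + ["b"] * 2 + ["c"] * 2             # non-uniform symbol stream
print(allocate_twins(data, units_per_cluster=10))    # each unit ends up used exactly once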

Download manuscript.

Bibtex
@article{BogGriSegHei2016,
  author = {Bartosz Boguslawski and Vincent Gripon and
Fabrice Seguin and Frédéric Heitzmann},
  title = {Twin Neurons for Efficient Real-World Data
Distribution in Networks of Neural Cliques.
Applications in Power Management in Electronic
circuits},
  journal = {IEEE Transactions on Neural Networks and
Learning Systems},
  year = {2016},
  volume = {27},
  number = {2},
  pages = {375--387},
}

Storing sequences in binary tournament-based neural networks

X. Jiang, V. Gripon, C. Berrou and M. Rabbat, "Storing sequences in binary tournament-based neural networks," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 5, pp. 913--925, 2016.

An extension to a recently introduced architecture of clique-based neural networks is presented. This extension makes it possible to store sequences with high efficiency. To obtain this property, network connections are provided with orientation and with flexible redundancy carried by both spatial and temporal redundancy, a mechanism of anticipation being introduced in the model. In addition to the sequence storage with high efficiency, this new scheme also offers biological plausibility. In order to achieve accurate sequence retrieval, a double layered structure combining hetero-association and auto-association is also proposed.

Download manuscript.

Bibtex
@article{JiaGriBerRab2016,
  author = {Xiaoran Jiang and Vincent Gripon and
Claude Berrou and Michael Rabbat},
  title = {Storing sequences in binary
tournament-based neural networks},
  journal = {IEEE Transactions on Neural Networks and
Learning Systems},
  year = {2016},
  volume = {27},
  number = {5},
  pages = {913--925},
}

Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks," in Transactions on Very Large Scale Integration Systems, Volume 27, Number 2, pp. 375--387, 2016.

We propose a low-power content-addressable memory (CAM) employing a new algorithm for associativity between the input tag and the corresponding address of the output data. The proposed architecture is based on a recently developed sparse clustered network using binary connections that on average eliminates most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower compared with that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs the comparisons on them to locate a single valid match. TSMC 65-nm CMOS technology was used for simulation purposes. Following a selection of design parameters, such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8% and 26% of those of the conventional NAND architecture, respectively, with a 10% area overhead. A design methodology based on the silicon area and power budgets, and performance requirements is discussed.
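
In purely software terms, the search strategy can be pictured as follows: split each tag into sub-tags, remember which stored entries use each sub-tag value, and at search time intersect those small candidate sets so that only a handful of exact comparisons remain. The class below is a conceptual sketch of that behaviour under these assumptions, not the hardware architecture or its exact algorithm.

class SketchCAM:
    def __init__(self, n_parts):
        self.n_parts = n_parts
        self.tags, self.data = [], []
        self.index = [dict() for _ in range(n_parts)]   # sub-tag value -> set of entry ids

    def _parts(self, tag):
        step = len(tag) // self.n_parts
        return [tag[i * step:(i + 1) * step] for i in range(self.n_parts)]

    def store(self, tag, value):
        entry = len(self.tags)
        self.tags.append(tag)
        self.data.append(value)
        for cluster, part in zip(self.index, self._parts(tag)):
            cluster.setdefault(part, set()).add(entry)

    def search(self, tag):
        candidates = None
        for cluster, part in zip(self.index, self._parts(tag)):
            ids = cluster.get(part, set())
            candidates = ids if candidates is None else candidates & ids
        for entry in candidates or ():                   # only a few exact comparisons
            if self.tags[entry] == tag:
                return self.data[entry]
        return None

cam = SketchCAM(n_parts=4)
cam.store("11010010", "addr_7")
cam.store("00110101", "addr_2")
print(cam.search("11010010"))                            # -> addr_7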

Download manuscript.

Bibtex
@article{JarGriOniGro2016,
  author = {Hooman Jarollahi and Vincent Gripon and
Naoya Onizawa and Warren J. Gross},
  title = {Algorithm and Architecture for a Low-Power
Content-Addressable Memory Based on Sparse-Clustered
Networks},
  journal = {Transactions on Very Large Scale
Integration Systems},
  year = {2016},
  volume = {27},
  number = {2},
  pages = {375--387},
}

A Neural Network Model for Solving the Feature Correspondence Problem

A. Aboudib, V. Gripon and G. Coppin, "A Neural Network Model for Solving the Feature Correspondence Problem," in Lecture Notes in Computer Science, Volume 9887, pp. 439--446, September 2016.

Finding correspondences between image features is a fundamental question in computer vision. Many models in the literature have proposed to view this as a graph matching problem whose solution can be approximated using optimization principles. In this paper, we propose a different treatment of this problem from a neural network perspective. We present a new model for matching features inspired by the architecture of a recently introduced neural network. We show that by using popular neural network principles like max-pooling, k-winners-take-all and iterative processing, we obtain a better accuracy at matching features in cluttered environments. The proposed solution is accompanied by an experimental evaluation and is compared to state-of-the-art models.
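
To make two of these ingredients concrete, the sketch below applies k-winners-take-all and iterative processing to a plain descriptor-similarity matrix: competition along rows and columns gradually leaves only mutual best matches. It is a simplified illustration of those principles, not the network proposed in the paper.

import numpy as np

def kwta(x, k):
    # k-winners-take-all: keep the k largest entries of x, zero out the rest
    out = np.zeros_like(x)
    winners = np.argsort(-x)[:k]
    out[winners] = x[winners]
    return out

def match(desc_a, desc_b, n_iter=10, k=1):
    # Start from descriptor similarities and repeatedly apply k-WTA along rows
    # (candidates of each a-feature) and columns (candidates of each b-feature).
    S = desc_a @ desc_b.T
    for _ in range(n_iter):
        S = np.apply_along_axis(kwta, 1, S, k)
        S = np.apply_along_axis(kwta, 0, S, k)
    return [(i, int(np.argmax(row))) for i, row in enumerate(S) if row.max() > 0]

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 16))
B = A[[3, 0, 4, 1, 2]] + 0.05 * rng.standard_normal((5, 16))   # permuted, noisy copies of A
print(match(A, B))   # typically recovers the permutation: [(0, 1), (1, 3), (2, 4), (3, 0), (4, 2)]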

Download manuscript.

Bibtex
@article{AboGriCop20169,
  author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
  title = {A Neural Network Model for Solving the
Feature Correspondence Problem},
  journal = {Lecture Notes in Computer Science},
  year = {2016},
  volume = {9887},
  pages = {439--446},
  month = {September},
}

Compression of Deep Neural Networks on the Fly

G. Soulié, V. Gripon and M. Robert, "Compression of Deep Neural Networks on the Fly," in Lecture Notes in Computer Science, Volume 9887, pp. 153--170, September 2016.

Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve the best results, they rely on millions of parameters that must be trained. However, when targeting embedded applications, the size of these models becomes problematic. As a consequence, they cannot readily be used on smartphones or other resource-limited devices. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.
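
As an illustration of the product quantization step, the snippet below quantizes a weight matrix: each row is cut into sub-vectors, a small k-means codebook is learned per sub-space (scikit-learn is assumed here), and only the codebooks and per-row indices are kept. The training-time regularization term is not shown, and the matrix size, number of sub-vectors and codebook sizes are arbitrary stand-ins rather than the paper's settings.

import numpy as np
from sklearn.cluster import KMeans

def product_quantize(W, n_subvectors=4, n_centroids=16):
    # Split each row of W into sub-vectors, learn one codebook per sub-space,
    # and keep only the codebooks plus the per-row centroid indices.
    d = W.shape[1] // n_subvectors
    codebooks, codes = [], []
    for s in range(n_subvectors):
        sub = W[:, s * d:(s + 1) * d]
        km = KMeans(n_clusters=n_centroids, n_init=4, random_state=0).fit(sub)
        codebooks.append(km.cluster_centers_)
        codes.append(km.labels_)
    return codebooks, np.stack(codes, axis=1)

def reconstruct(codebooks, codes):
    # Rebuild an approximate weight matrix from codebooks and indices.
    return np.hstack([cb[codes[:, s]] for s, cb in enumerate(codebooks)])

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 64))              # stand-in for a trained fully-connected layer
codebooks, codes = product_quantize(W)
W_hat = reconstruct(codebooks, codes)
# storage drops from 256*64 floats to four 16x16 codebooks plus 256*4 small indices
print("relative reconstruction error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))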

Download manuscript.

Bibtex
@article{SouGriRob20169,
  author = {Guillaume Soulié and Vincent Gripon and
Maëlys Robert},
  title = {Compression of Deep Neural Networks on the
Fly},
  journal = {Lecture Notes in Computer Science},
  year = {2016},
  volume = {9887},
  pages = {153--170},
  month = {September},
}

A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention

A. Aboudib, V. Gripon and G. Coppin, "A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention," in Cognitive Computation, pp. 1--20, September 2016.

An emerging trend in visual information processing is toward incorporating some interesting properties of the ventral stream in order to account for some limitations of machine learning algorithms. Selective attention and cortical magnification are two such important phenomena that have been the subject of a large body of research in recent years. In this paper, we focus on designing a new model for visual acquisition that takes these important properties into account.

Download manuscript.

Bibtex
@article{AboGriCop20169,
  author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
  title = {A Biologically Inspired Framework for
Visual Information Processing and an Application on
Modeling Bottom-Up Visual Attention},
  journal = {Cognitive Computation},
  year = {2016},
  pages = {1--20},
  month = {September},
}

Fault-Tolerant Associative Memories Based on c-Partite Graphs

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. J. Gross, "Fault-Tolerant Associative Memories Based on c-Partite Graphs," in IEEE Transactions on Signal Processing, Volume 64, Number 4, pp. 829--841, 2015.

Associative memories allow the retrieval of previously stored messages given a part of their content. In this paper, we are interested in associative memories based on c-partite graphs that were recently introduced. These memories are almost optimal in terms of the amount of storage they require (efficiency) and allow retrieving messages with low complexity. We propose a generic implementation model for the retrieval algorithm that can be readily mapped to an integrated circuit and study the retrieval performance when hardware components are affected by faults. We show using analytical and simulation results that these associative memories can be made resilient to circuit faults with a minor modification of the retrieval algorithm. In one example, the memory retains 88% of its efficiency when 1% of the storage cells are faulty, or 98% when 0.1% of the binary outputs of the retrieval algorithm are faulty. When considering storage faults, the fault tolerance exhibited by the proposed associative memory can be comparable to using a capacity-achieving error correction code for protecting the stored information.

Download manuscript.

Bibtex
@article{LedGriRabGro2015,
  author = {François Leduc-Primeau and Vincent Gripon
and Michael Rabbat and Warren J. Gross},
  title = {Fault-Tolerant Associative Memories Based
on c-Partite Graphs},
  journal = {IEEE Transactions on Signal Processing},
  year = {2015},
  volume = {64},
  number = {4},
  pages = {829--841},
}

A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture

H. Jarollahi, N. Onizawa, V. Gripon, N. Sakimura, T. Sugibayashi, T. Endoh, H. Ohno, T. Hanyu and W. J. Gross, "A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture," in Journal on Emerging and Selected Topics in Circuits and Systems, Volume 4, pp. 460--474, 2014.

This paper presents the algorithm, architecture, and fabrication results of a nonvolatile context-driven search engine that reduces energy consumption as well as computational delay compared to classical hardware- and software-based approaches. The proposed architecture stores only associations between items from multiple search fields in the form of binary links, and merges repeated field items to reduce the memory requirements and accesses. The fabricated chip achieves memory reduction and 89% energy saving compared to a classical field-based approach in hardware, based on content-addressable memory (CAM). Furthermore, it achieves a reduced number of clock cycles in performing search operations compared to the CAM, and five orders of magnitude fewer clock cycles compared to a fabricated and measured ultra-low-power CPU-based counterpart running a classical search algorithm in software. The energy consumption of the proposed architecture is on average three orders of magnitude smaller than that of a software-based approach. A magnetic tunnel junction (MTJ)-based logic-in-memory architecture is presented that allows simple routing and eliminates leakage current in standby using 90 nm CMOS/MTJ-hybrid technologies.

Download manuscript.

Bibtex
@article{JarOniGriSakSugEndOhnHanGro2014,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Noboru Sakimura and Tadahiko
Sugibayashi and Tetsuo Endoh and Hideo Ohno and
Takahiro Hanyu and Warren J. Gross},
  title = {A Non-Volatile Associative Memory-Based
Context-Driven Search Engine Using 90 nm CMOS
MTJ-Hybrid Logic-in-Memory Architecture},
  journal = {Journal on Emerging and Selected Topics
in Circuits and Systems},
  year = {2014},
  volume = {4},
  pages = {460--474},
}

Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks," in Journal of Signal Processing Systems, pp. 1--13, 2014.

Associative memories retrieve stored information given partial or erroneous input patterns. A new family of associative memories based on Sparse Clustered Networks (SCNs) has been recently introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose fully-parallel hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules and thus reduce the hardware complexity by consuming 65% fewer FPGA lookup tables and increase the operating frequency by approximately 1.9 times compared to that of previous work. Furthermore, the scaling behaviour of the implemented architectures for various design choices is investigated. We explore the effect of varying design variables such as the number of clusters, network nodes, and erased symbols on the error performance and the hardware resources.

Download manuscript.

Bibtex
@article{JarOniGriGro2014,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Warren J. Gross},
  title = {Algorithm and Architecture of
Fully-Parallel Associative Memories Based on Sparse
Clustered Networks},
  journal = {Journal of Signal Processing Systems},
  year = {2014},
  pages = {1--13},
}

Storing sparse messages in networks of neural cliques

B. K. Aliabadi, C. Berrou, V. Gripon and X. Jiang, "Storing sparse messages in networks of neural cliques," in IEEE Transactions on Neural Networks and Learning Systems, Volume 25, pp. 980--989, 2014.

An extension to a recently introduced binary neural network is proposed in order to allow the learning of sparse messages, in large numbers and with high memory efficiency. This new network is justified both in biological and informational terms. The learning and retrieval rules are detailed and illustrated by various simulation results.

Download manuscript.

Bibtex
@article{AliBerGriJia2014,
  author = {Behrooz Kamary Aliabadi and Claude Berrou
and Vincent Gripon and Xiaoran Jiang},
  title = {Storing sparse messages in networks of
neural cliques},
  journal = {IEEE Transactions on Neural Networks and
Learning Systems},
  year = {2014},
  volume = {25},
  pages = {980--989},
}

Sparse neural networks with large learning diversity

V. Gripon and C. Berrou, "Sparse neural networks with large learning diversity," in IEEE Transactions on Neural Networks, Volume 22, Number 7, pp. 1087--1096, July 2011.

Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages, much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint in the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures.
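
A compact software sketch of the scheme described above: each message selects one neuron per cluster, learning stores the corresponding binary clique, and retrieval under erasures repeatedly keeps, in every cluster, the units receiving the most signals from the currently active units. The network sizes, number of stored messages and the small bonus given to already-active units are illustrative choices, not the paper's exact settings.

import numpy as np

rng = np.random.default_rng(0)
c, l, n_messages = 8, 32, 200                  # clusters, neurons per cluster, stored messages

# each message picks one neuron per cluster; learning stores it as a binary clique
messages = rng.integers(0, l, size=(n_messages, c))
W = np.zeros((c * l, c * l), dtype=bool)
for msg in messages:
    units = [cl * l + v for cl, v in enumerate(msg)]
    for i in units:
        for j in units:
            if i != j:
                W[i, j] = True

def retrieve(partial, n_iter=4):
    # Complete a message whose erased symbols are None: activate the known
    # units, then repeatedly keep, in every cluster, the unit(s) receiving the
    # most signals from currently active units (winner-take-all per cluster).
    active = np.zeros(c * l, dtype=bool)
    for cl, v in enumerate(partial):
        if v is not None:
            active[cl * l + v] = True
    for _ in range(n_iter):
        scores = (W & active).sum(axis=1) + active   # small bonus for active units
        new_active = np.zeros_like(active)
        for cl in range(c):
            block = scores[cl * l:(cl + 1) * l]
            new_active[cl * l:(cl + 1) * l] = block == block.max()
        active = new_active
    return [int(np.argmax(active[cl * l:(cl + 1) * l])) for cl in range(c)]

stored = list(messages[0])
partial = stored.copy()
partial[0] = partial[1] = None                 # erase two of the eight symbols
print(retrieve(partial) == stored)             # almost always True with these settings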

Download manuscript.

Bibtex
@article{GriBer20117,
  author = {Vincent Gripon and Claude Berrou},
  title = {Sparse neural networks with large learning
diversity},
  journal = {IEEE Transactions on Neural Networks},
  year = {2011},
  volume = {22},
  number = {7},
  pages = {1087--1096},
  month = {July},
}



