Vincent Gripon's Website

Blog about my research and teaching

Journal articles

2017

B. Pasdeloup, V. Gripon, G. Mercier, D. Pastor and M. Rabbat, "Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals," in IEEE Transactions on Signal and Information Processing over Networks, 2017. To appear. Manuscript.

A. Iscen, T. Furon, V. Gripon, M. Rabbat and H. Jégou, "Memory vectors for similarity search in high-dimensional spaces," in IEEE Transactions on Big Data, pp. 1--13, 2017.

A. Mheich, M. Hassan, M. Khalil, V. Gripon, O. Dufor and F. Wendling, "SimiNet: a Novel Method for Quantifying Brain Network Similarity," in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. To appear.

2016

V. Gripon, J. Heusel, M. Löwe and F. Vermet, "A Comparative Study of Sparse Associative Memories," in Journal of Statistical Physics, Volume 164, pp. 105--129, 2016. Manuscript.

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 2, pp. 375--387, 2016. Manuscript.

X. Jiang, V. Gripon, C. Berrou and M. Rabbat, "Storing sequences in binary tournament-based neural networks," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 5, pp. 913--925, 2016. Manuscript.

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks," in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Volume 27, Number 2, pp. 375--387, 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Neural Network Model for Solving the Feature Correspondence Problem," in Lecture Notes in Computer Science, Volume 9887, pp. 439--446, September 2016. Manuscript.

G. Soulié, V. Gripon and M. Robert, "Compression of Deep Neural Networks on the Fly," in Lecture Notes in Computer Science, Volume 9887, pp. 153--170, September 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention," in Cognitive Computation, pp. 1--20, September 2016. Manuscript.

2015

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. J. Gross, "Fault-Tolerant Associative Memories Based on c-Partite Graphs," in IEEE Transactions on Signal Processing, Volume 64, Number 4, pp. 829--841, 2015. Manuscript.

2014

H. Jarollahi, N. Onizawa, V. Gripon, N. Sakimura, T. Sugibayashi, T. Endoh, H. Ohno, T. Hanyu and W. J. Gross, "A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture," in Journal on Emerging and Selected Topics in Circuits and Systems, Volume 4, pp. 460--474, 2014. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks," in Journal of Signal Processing Systems, pp. 1--13, 2014. Manuscript.

B. K. Aliabadi, C. Berrou, V. Gripon and X. Jiang, "Storing sparse messages in networks of neural cliques," in IEEE Transactions on Neural Networks and Learning Systems, Volume 25, pp. 980--989, 2014. Manuscript.

2011

V. Gripon and C. Berrou, "Sparse neural networks with large learning diversity," in IEEE Transactions on Neural Networks, Volume 22, Number 7, pp. 1087--1096, July 2011. Manuscript.

Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals

B. Pasdeloup, V. Gripon, G. Mercier, D. Pastor and M. Rabbat, "Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals," in IEEE Transactions on Signal and Information Processing over Networks, 2017. To appear.

Many tools from the field of graph signal processing exploit knowledge of the underlying graph’s structure (e.g., as encoded in the Laplacian matrix) to process signals on the graph. When no graph is available, these tools can therefore no longer be used. Researchers have proposed approaches to infer a graph topology from observations of signals on its vertices. Since the problem is ill-posed, these approaches make assumptions, such as smoothness of the signals on the graph or sparsity priors. In this paper, we propose a characterization of the space of valid graphs, in the sense that they can explain stationary signals. To simplify the exposition, we focus here on the case where signals were i.i.d. at some point back in time and were observed after diffusion on a graph. We show that the set of graphs verifying this assumption has a strong connection with the eigenvectors of the covariance matrix and forms a convex set. Along with a theoretical study in which these eigenvectors are assumed to be known, we consider the practical case where the observations are noisy, and experimentally observe how fast the set of valid graphs converges to the set obtained when the exact eigenvectors are known, as the number of observations grows. To illustrate how this characterization can be used for graph recovery, we present two methods for selecting a particular point in this set under chosen criteria, namely graph simplicity and sparsity. Additionally, we introduce a measure to evaluate how well a graph is adapted to signals under a stationarity assumption. Finally, we evaluate how state-of-the-art methods relate to this framework through experiments on a dataset of temperatures.
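
As a rough numerical illustration of the eigenvector connection mentioned in the abstract (a sketch under assumptions of mine, not the paper's inference method): if the signals were white noise before diffusion, their covariance shares the eigenvectors of the diffusion matrix, so those eigenvectors can be estimated from observations alone.

import numpy as np

rng = np.random.default_rng(0)
n = 10

# Hypothetical ground truth: a random symmetric diffusion matrix on n nodes.
A = rng.random((n, n))
W = (A + A.T) / 2

# Observations: i.i.d. white signals diffused k times through W.
k, n_obs = 3, 100000
X = np.linalg.matrix_power(W, k) @ rng.standard_normal((n, n_obs))

# The sample covariance estimates W^(2k), which shares W's eigenvectors;
# directions with well-separated eigenvalues are recovered accurately,
# so we check the leading one.
C = X @ X.T / n_obs
_, V_true = np.linalg.eigh(W)
_, V_est = np.linalg.eigh(C)
print(abs(V_true[:, -1] @ V_est[:, -1]))   # alignment close to 1.0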

Download the manuscript.

Bibtex
@article{PasGriMerPasRab2017,
  author = {Bastien Pasdeloup and Vincent Gripon and Grégoire Mercier and Dominique Pastor and Michael Rabbat},
  title = {Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals},
  journal = {IEEE Transactions on Signal and Information Processing over Networks},
  year = {2017},
  note = {To appear},
}

Memory vectors for similarity search in high-dimensional spaces

A. Iscen, T. Furon, V. Gripon, M. Rabbat and H. Jégou, "Memory vectors for similarity search in high-dimensional spaces," in IEEE Transactions on Big Data, pp. 1--13, 2017.

We study an indexing architecture to store and search in a database of high-dimensional vectors from the perspective of statistical signal processing and decision theory. This architecture is composed of several memory units, each of which summarizes a fraction of the database by a single representative vector. The potential similarity of the query to one of the vectors stored in the memory unit is gauged by a simple correlation with the memory unit’s representative vector. This representative optimizes the test of the following hypothesis: the query is independent of any vector in the memory unit vs. the query is a simple perturbation of one of the stored vectors. Compared to exhaustive search, our approach finds the most similar database vectors significantly faster without a noticeable reduction in search quality. Interestingly, the reduction of complexity is provably better in high-dimensional spaces. We empirically demonstrate its practical interest in a large-scale image search scenario with off-the-shelf state-of-the-art descriptors.
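
A toy sketch of the screening step (using the plain sum of the unit's vectors as the representative; the paper derives an optimized representative, not shown here, and all sizes below are illustrative): a query is compared against each unit with a single inner product, and only the best-scoring units are searched exhaustively.

import numpy as np

rng = np.random.default_rng(0)
d, n_units, unit_size = 256, 100, 10

# Stand-in database: unit-normalized random vectors, split into memory units.
database = rng.standard_normal((n_units, unit_size, d))
database /= np.linalg.norm(database, axis=2, keepdims=True)
memory_vectors = database.sum(axis=1)          # one representative per unit

# Query: a noisy copy of one stored vector.
target = database[42, 3]
query = target + 0.1 * rng.standard_normal(d)
query /= np.linalg.norm(query)

scores = memory_vectors @ query                # one correlation per unit
candidates = np.argsort(scores)[::-1][:5]      # only these units get a full search
print(42 in candidates)                        # expect True: unit 42 scores high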


Bibtex
@article{IscFurGriRabJé2017,
  author = {Ahmet Iscen and Teddy Furon and Vincent
Gripon and Michael Rabbat and Hervé Jégou},
  title = {Memory vectors for similarity search in
high-dimensional spaces},
  journal = {IEEE Transactions on Big Data},
  year = {2017},
  pages = {1--13},
}

SimiNet: a Novel Method for Quantifying Brain Network Similarity

A. Mheich, M. Hassan, M. Khalil, V. Gripon, O. Dufor and F. Wendling, "SimiNet: a Novel Method for Quantifying Brain Network Similarity," in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. To appear.

Quantifying the similarity between two networks is critical in many applications. A number of algorithms have been proposed to compute graph similarity, mainly based on the properties of nodes and edges. Interestingly, most of these algorithms ignore the physical location of the nodes, which is a key factor in the context of brain networks involving spatially defined functional areas. In this paper, we present a novel algorithm called "SimiNet" for measuring similarity between two graphs whose nodes are defined a priori within a 3D coordinate system. SimiNet provides a quantified index (ranging from 0 to 1) that accounts for node, edge and spatiality features. Complex graphs were simulated to evaluate the performance of SimiNet, which is compared with eight state-of-the-art methods. Results show that SimiNet is able to detect weak spatial variations in the compared graphs, in addition to computing similarity using both nodes and edges. SimiNet was also applied to real brain networks obtained during a visual recognition task. The algorithm shows high performance in detecting spatial variations of brain networks obtained during a naming task with two categories of visual stimuli: animals and tools. A perspective of this work is a better understanding of object categorization in the human brain.
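
This is not the actual SimiNet algorithm, but a toy illustration of what a [0, 1] index combining node, edge and spatial terms can look like; the Gaussian spatial term, the equal weighting, and the assumption that nodes are identified a priori (as in brain networks) are all mine.

import numpy as np

def toy_similarity(coords_a, edges_a, coords_b, edges_b, sigma=10.0):
    # Spatial/node term: Gaussian of the mean distance between matched nodes.
    dists = np.linalg.norm(coords_a - coords_b, axis=1)
    node_term = np.exp(-dists.mean() / sigma)
    # Edge term: Jaccard overlap of the two edge sets.
    ea, eb = set(map(tuple, edges_a)), set(map(tuple, edges_b))
    edge_term = len(ea & eb) / max(len(ea | eb), 1)
    return 0.5 * node_term + 0.5 * edge_term   # index in [0, 1]

coords = np.array([[0., 0, 0], [10, 0, 0], [0, 10, 0]])
edges = [(0, 1), (1, 2)]
print(toy_similarity(coords, edges, coords + 0.5, edges))   # close to 1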


Bibtex
@article{MheHasKhaGriDufWen2017,
  author = {Ahmad Mheich and Mahmoud Hassan and Mohamad Khalil and Vincent Gripon and Olivier Dufor and Fabrice Wendling},
  title = {SimiNet: a Novel Method for Quantifying Brain Network Similarity},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year = {2017},
  note = {In press},
}

A Comparative Study of Sparse Associative Memories

V. Gripon, J. Heusel, M. Löwe and F. Vermet, "A Comparative Study of Sparse Associative Memories," in Journal of Statistical Physics, Volume 164, pp. 105--129, 2016.

We study various models of associative memories with sparse information, i.e., a pattern to be stored is a random string of 0s and 1s with only about log N ones. We compare different synaptic weights, architectures and retrieval mechanisms to shed light on the influence of the various parameters on the storage capacity.
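
For concreteness, here is a minimal Willshaw-type binary memory, a classic member of the family of sparse associative memories the paper studies (sizes and the one-shot retrieval rule below are illustrative choices of mine, not the paper's exact setup).

import numpy as np

rng = np.random.default_rng(0)
N, n_active = 1024, 10          # about log2(1024) = 10 active units per pattern
n_patterns = 500

patterns = np.zeros((n_patterns, N), dtype=bool)
for p in patterns:
    p[rng.choice(N, n_active, replace=False)] = True

# Clipped Hebbian learning: a synapse is on iff two units co-occur in a pattern.
W = np.zeros((N, N), dtype=bool)
for p in patterns:
    W |= np.outer(p, p)

# Retrieval from a partial cue: keep the n_active units with the most input.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[:5]] = False            # erase half of the active units
scores = W[:, cue].sum(axis=1)                  # active inputs per unit
recalled = np.zeros(N, dtype=bool)
recalled[np.argsort(scores)[-n_active:]] = True
print((recalled & patterns[0]).sum(), "of", n_active, "active units recovered")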

Download the manuscript.

Bibtex
@article{GriHeuLöVer2016,
  author = {Vincent Gripon and Judith Heusel and Matthias Löwe and Franck Vermet},
  title = {A Comparative Study of Sparse Associative Memories},
  journal = {Journal of Statistical Physics},
  year = {2016},
  volume = {164},
  pages = {105--129},
}

Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 2, pp. 375--387, 2016.

Associative memories are data structures that allow retrieval of previously stored messages given part of their content. They thus behave similarly to the human brain’s memory, which is capable, for instance, of retrieving the end of a song given its beginning. Among the different families of associative memories, sparse ones are known to provide the best efficiency (ratio of the number of bits stored to the number of bits used). Recently, a new family of sparse associative memories achieving almost-optimal efficiency has been proposed. Their structure induces a direct mapping between input messages and stored patterns. Nevertheless, it is well known that non-uniformity of the stored messages can lead to a dramatic decrease in performance. In this work, we show the impact of non-uniformity on the performance of this recent model and exploit the structure of the model to improve its performance in practical applications where data is not necessarily uniform. In order to approach the performance of networks with uniformly distributed messages reported in theoretical studies, twin neurons are introduced. To assess the adapted model, twin neurons are used with real-world data to optimize the power consumption of electronic circuits in practical test cases.
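
The direct mapping mentioned above, and the problem that twin neurons address, can be shown numerically (the network sizes and the skewed geometric distribution below are assumptions of mine, not the paper's data): with one neuron per (cluster, value) pair, non-uniform sub-symbols concentrate activity on a few neurons, which twin neurons then split.

import numpy as np

rng = np.random.default_rng(0)
c, L, n_msgs = 4, 64, 10000            # 4 clusters of 64 neurons each

uniform = rng.integers(0, L, size=(n_msgs, c))
skewed = np.minimum(rng.geometric(0.15, size=(n_msgs, c)) - 1, L - 1)

for name, msgs in (("uniform", uniform), ("skewed", skewed)):
    usage = np.zeros((c, L), dtype=int)
    for m in msgs:
        usage[np.arange(c), m] += 1    # each sub-symbol activates one neuron
    share = usage.max(axis=1) / n_msgs
    print(name, "-> busiest neuron per cluster carries",
          np.round(share, 2), "of all messages")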

Download the manuscript.

Bibtex
@article{BogGriSegHei2016,
  author = {Bartosz Boguslawski and Vincent Gripon and Fabrice Seguin and Frédéric Heitzmann},
  title = {Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  year = {2016},
  volume = {27},
  number = {2},
  pages = {375--387},
}

Storing sequences in binary tournament-based neural networks

X. Jiang, V. Gripon, C. Berrou and M. Rabbat, "Storing sequences in binary tournament-based neural networks," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 5, pp. 913--925, 2016.

An extension to a recently introduced architecture of clique-based neural networks is presented. This extension makes it possible to store sequences with high efficiency. To obtain this property, network connections are provided with orientation and with flexible redundancy carried by both spatial and temporal redundancy, a mechanism of anticipation being introduced in the model. In addition to the sequence storage with high efficiency, this new scheme also offers biological plausibility. In order to achieve accurate sequence retrieval, a double layered structure combining hetero-association and auto-association is also proposed.
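
A bare-bones sketch of oriented storage (a simplification of mine; the paper's tournament-based model with flexible redundancy and anticipation is considerably richer): directed binary links between consecutive clusters let a stored sequence be replayed from its first symbol.

import numpy as np

c, L = 8, 26                                   # 8 clusters, alphabet of 26 letters
links = np.zeros((c, L, L), dtype=bool)        # links[t]: cluster t -> cluster t+1

def store(seq):
    for t in range(len(seq) - 1):
        links[t % c, seq[t], seq[t + 1]] = True    # oriented binary connection

def replay(first, length):
    out = [first]
    for t in range(length - 1):
        successors = np.flatnonzero(links[t % c, out[-1]])
        out.append(int(successors[0]))             # ambiguity handling omitted
    return out

seq = [ord(ch) - 97 for ch in "sequence"]
store(seq)
print("".join(chr(97 + s) for s in replay(seq[0], len(seq))))   # -> sequence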

Download the manuscript.

Bibtex
@article{JiaGriBerRab2016,
  author = {Xiaoran Jiang and Vincent Gripon and Claude Berrou and Michael Rabbat},
  title = {Storing sequences in binary tournament-based neural networks},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  year = {2016},
  volume = {27},
  number = {5},
  pages = {913--925},
}

Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks," in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Volume 27, Number 2, pp. 375--387, 2016.

We propose a low-power content-addressable memory (CAM) employing a new algorithm for associativity between the input tag and the corresponding address of the output data. The proposed architecture is based on a recently developed sparse clustered network using binary connections that on average eliminates most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower compared with that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs the comparisons on them to locate a single valid match. TSMC 65-nm CMOS technology was used for simulation purposes. Following a selection of design parameters, such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8% and 26% of that of the conventional NAND architecture, respectively, with a 10% area overhead. A design methodology based on the silicon area and power budgets, and performance requirements is discussed.
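
A software analogy of the search principle (a sketch of the idea only; the paper describes a 65-nm circuit, and the class name and slice sizes here are hypothetical): slices of the input tag select small clusters that narrow the search down to a handful of candidate addresses, so only those few entries are compared in full.

from collections import defaultdict

class SCNCam:
    """Toy CAM with sparse-clustered pre-selection (illustrative parameters)."""

    def __init__(self, tag_bits=16, slice_bits=4):
        self.tag_bits = tag_bits
        self.slice_bits = slice_bits
        self.entries = []                      # address -> stored tag
        self.clusters = defaultdict(set)       # (slice index, value) -> addresses

    def _slices(self, tag):
        mask = (1 << self.slice_bits) - 1
        return [(tag >> s) & mask for s in range(0, self.tag_bits, self.slice_bits)]

    def write(self, tag):
        addr = len(self.entries)
        self.entries.append(tag)
        for i, v in enumerate(self._slices(tag)):
            self.clusters[(i, v)].add(addr)

    def search(self, tag):
        # Intersecting the clusters leaves only a few candidate addresses,
        # so very few full tag comparisons are performed.
        cands = set.intersection(*(self.clusters.get((i, v), set())
                                   for i, v in enumerate(self._slices(tag))))
        return [a for a in cands if self.entries[a] == tag]

cam = SCNCam()
for t in (0xBEEF, 0xCAFE, 0xF00D):
    cam.write(t)
print(cam.search(0xCAFE))    # -> [1]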

Download the manuscript.

Bibtex
@article{JarGriOniGro2016,
  author = {Hooman Jarollahi and Vincent Gripon and Naoya Onizawa and Warren J. Gross},
  title = {Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks},
  journal = {IEEE Transactions on Very Large Scale Integration (VLSI) Systems},
  year = {2016},
  volume = {27},
  number = {2},
  pages = {375--387},
}

A Neural Network Model for Solving the Feature Correspondence Problem

A. Aboudib, V. Gripon and G. Coppin, "A Neural Network Model for Solving the Feature Correspondence Problem," in Lecture Notes in Computer Science, Volume 9887, pp. 439--446, September 2016.

Finding correspondences between image features is a fundamental question in computer vision. Many models in the literature have proposed to view this as a graph matching problem whose solution can be approximated using optimization principles. In this paper, we propose a different treatment of this problem from a neural network perspective. We present a new model for matching features inspired by the architecture of a recently introduced neural network. We show that by using popular neural network principles such as max-pooling, k-winners-take-all and iterative processing, we obtain better accuracy at matching features in cluttered environments. The proposed solution is accompanied by an experimental evaluation and is compared to state-of-the-art models.
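
Of the ingredients listed above, k-winners-take-all is easy to show in isolation; this is a generic numpy version (the paper's network wiring is not reproduced here, and the input values are illustrative).

import numpy as np

def k_winners_take_all(x, k):
    """Keep the k largest activations, zero out the rest."""
    out = np.zeros_like(x)
    winners = np.argpartition(x, -k)[-k:]
    out[winners] = x[winners]
    return out

print(k_winners_take_all(np.array([0.2, 0.9, 0.1, 0.7, 0.5]), k=2))
# -> [0.  0.9  0.  0.7  0. ]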

Download the manuscript.

Bibtex
@article{AboGriCop20169,
  author = {Ala Aboudib and Vincent Gripon and Gilles Coppin},
  title = {A Neural Network Model for Solving the Feature Correspondence Problem},
  journal = {Lecture Notes in Computer Science},
  year = {2016},
  volume = {9887},
  pages = {439--446},
  month = {September},
}

Compression of Deep Neural Networks on the Fly

G. Soulié, V. Gripon and M. Robert, "Compression of Deep Neural Networks on the Fly," in Lecture Notes in Computer Science, Volume 9887, pp. 153--170, September 2016.

Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve the best results, they require millions of trained parameters. However, when targeting embedded applications, the size of these models becomes problematic. As a consequence, their use on smartphones or other resource-limited devices is impractical. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.
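
A sketch of the Product Quantization stage applied to a trained weight matrix (the regularization term added during training is not shown; the matrix, block count and codebook size are illustrative, and scikit-learn's KMeans stands in for the codebook learner).

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))          # hypothetical fully-connected weights

n_blocks, n_codewords = 8, 16                # 256 cols -> 8 sub-vectors of 32 dims
blocks = np.split(W, n_blocks, axis=1)

compressed = []
for block in blocks:
    km = KMeans(n_clusters=n_codewords, n_init=4, random_state=0).fit(block)
    # Store only the codebook plus one 4-bit code per row instead of 32 floats.
    compressed.append((km.cluster_centers_, km.labels_))

W_hat = np.hstack([centers[labels] for centers, labels in compressed])
print("reconstruction MSE:", np.mean((W - W_hat) ** 2))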

Download the manuscript.

Bibtex
@article{SouGriRob20169,
  author = {Guillaume Soulié and Vincent Gripon and Maëlys Robert},
  title = {Compression of Deep Neural Networks on the Fly},
  journal = {Lecture Notes in Computer Science},
  year = {2016},
  volume = {9887},
  pages = {153--170},
  month = {September},
}

A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention

A. Aboudib, V. Gripon and G. Coppin, "A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention," in Cognitive Computation, pp. 1--20, September 2016.

An emerging trend in visual information processing is toward incorporating some interesting properties of the ventral stream in order to account for some limitations of machine learning algorithms. Selective attention and cortical magnification are two such important phenomena that have been the subject of a large body of research in recent years. In this paper, we focus on designing a new model for visual acquisition that takes these important properties into account.

Download the manuscript.

Bibtex
@article{AboGriCop20169b,
  author = {Ala Aboudib and Vincent Gripon and Gilles Coppin},
  title = {A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention},
  journal = {Cognitive Computation},
  year = {2016},
  pages = {1--20},
  month = {September},
}

Fault-Tolerant Associative Memories Based on c-Partite Graphs

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. J. Gross, "Fault-Tolerant Associative Memories Based on c-Partite Graphs," in IEEE Transactions on Signal Processing, Volume 64, Number 4, pp. 829--841, 2015.

Associative memories allow the retrieval of previously stored messages given a part of their content. In this paper, we are interested in recently introduced associative memories based on c-partite graphs. These memories are almost optimal in terms of the amount of storage they require (efficiency) and allow retrieving messages with low complexity. We propose a generic implementation model for the retrieval algorithm that can be readily mapped to an integrated circuit and study the retrieval performance when hardware components are affected by faults. We show using analytical and simulation results that these associative memories can be made resilient to circuit faults with a minor modification of the retrieval algorithm. In one example, the memory retains 88% of its efficiency when 1% of the storage cells are faulty, or 98% when 0.1% of the binary outputs of the retrieval algorithm are faulty. When considering storage faults, the fault tolerance exhibited by the proposed associative memory can be comparable to using a capacity-achieving error correction code for protecting the stored information.

Download the manuscript.

Bibtex
@article{LedGriRabGro2015,
  author = {François Leduc-Primeau and Vincent Gripon and Michael Rabbat and Warren J. Gross},
  title = {Fault-Tolerant Associative Memories Based on c-Partite Graphs},
  journal = {IEEE Transactions on Signal Processing},
  year = {2015},
  volume = {64},
  number = {4},
  pages = {829--841},
}

A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture

H. Jarollahi, N. Onizawa, V. Gripon, N. Sakimura, T. Sugibayashi, T. Endoh, H. Ohno, T. Hanyu and W. J. Gross, "A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture," in Journal on Emerging and Selected Topics in Circuits and Systems, Volume 4, pp. 460--474, 2014.

This paper presents the algorithm, architecture, and fabrication results of a nonvolatile context-driven search engine that reduces energy consumption as well as computational delay compared to classical hardware- and software-based approaches. The proposed architecture stores only associations between items from multiple search fields in the form of binary links, and merges repeated field items to reduce the memory requirements and accesses. The fabricated chip achieves memory reduction and 89% energy saving compared to a classical field-based approach in hardware, based on content-addressable memory (CAM). Furthermore, it achieves a reduced number of clock cycles in performing search operations compared to the CAM, and five orders of magnitude fewer clock cycles than a fabricated and measured ultra-low-power CPU-based counterpart running a classical search algorithm in software. The energy consumption of the proposed architecture is on average three orders of magnitude smaller than that of a software-based approach. A magnetic tunnel junction (MTJ)-based logic-in-memory architecture is presented that allows simple routing and eliminates leakage current in standby using 90 nm CMOS/MTJ-hybrid technologies.
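
A Python analogy of the storage principle (a sketch of the idea, not the fabricated MTJ circuit; the field names and records are invented for illustration): only binary links between field items are stored, and repeated items are merged so each value appears once per field.

from collections import defaultdict

fields = ("artist", "title", "year")               # hypothetical search fields
links = defaultdict(set)                           # (field, item) -> linked items

def store(record):
    items = list(zip(fields, record))
    for a in items:
        for b in items:
            if a != b:
                links[a].add(b)                    # binary link; repeated items merge

def search(field, item, want):
    return {v for f, v in links[(field, item)] if f == want}

store(("Miles Davis", "So What", 1959))
store(("Miles Davis", "Blue in Green", 1959))
print(search("artist", "Miles Davis", want="title"))
# -> {'So What', 'Blue in Green'}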

Download the manuscript.

Bibtex
@article{JarOniGriSakSugEndOhnHanGro2014,
  author = {Hooman Jarollahi and Naoya Onizawa and Vincent Gripon and Noboru Sakimura and Tadahiko Sugibayashi and Tetsuo Endoh and Hideo Ohno and Takahiro Hanyu and Warren J. Gross},
  title = {A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture},
  journal = {Journal on Emerging and Selected Topics in Circuits and Systems},
  year = {2014},
  volume = {4},
  pages = {460--474},
}

Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks," in Journal of Signal Processing Systems, pp. 1--13, 2014.

Associative memories retrieve stored information given partial or erroneous input patterns. A new family of associative memories based on Sparse Clustered Networks (SCNs) has been recently introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose fully-parallel hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules and thus reduce the hardware complexity, consuming 65% fewer FPGA lookup tables and increasing the operating frequency by approximately 1.9 times compared to previous work. Furthermore, the scaling behaviour of the implemented architectures for various design choices is investigated. We explore the effect of varying design variables such as the number of clusters, network nodes, and erased symbols on the error performance and the hardware resources.

Download the manuscript.

Bibtex
@article{JarOniGriGro2014,
  author = {Hooman Jarollahi and Naoya Onizawa and Vincent Gripon and Warren J. Gross},
  title = {Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks},
  journal = {Journal of Signal Processing Systems},
  year = {2014},
  pages = {1--13},
}

Storing sparse messages in networks of neural cliques

B. K. Aliabadi, C. Berrou, V. Gripon and X. Jiang, "Storing sparse messages in networks of neural cliques," in IEEE Transactions on Neural Networks and Learning Systems, Volume 25, pp. 980--989, 2014.

We propose an extension of a recently introduced binary neural network that makes it possible to learn sparse messages, in large numbers and with high storage efficiency. This new network is motivated by both biological and informational considerations. The learning and retrieval rules are detailed and illustrated by various simulation results.

Download the manuscript.

Bibtex
@article{AliBerGriJia2014,
  author = {Behrooz Kamary Aliabadi and Claude Berrou and Vincent Gripon and Xiaoran Jiang},
  title = {Storing sparse messages in networks of neural cliques},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  year = {2014},
  volume = {25},
  pages = {980--989},
}

Sparse neural networks with large learning diversity

V. Gripon and C. Berrou, "Sparse neural networks with large learning diversity," in IEEE Transactions on Neural Networks, Volume 22, Number 7, pp. 1087--1096, July 2011.

Neural networks with three levels of sparsity are introduced. The first is the size of the messages, much smaller than the number of neurons in the network. The second comes from a singular coding rule that acts as a local constraint on neural activity. The third is the sparseness of the network itself as it stands at the end of learning. Although the proposed model is very simple, relying on binary neurons and connections, it can learn and retrieve a large number of messages, even in the presence of many erasures.
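
A minimal sketch of the described model (network sizes are illustrative, and the paper's iterative retrieval rule is simplified here to a single voting pass): binary neurons grouped in clusters, each message stored as a clique of binary connections, and an erased sub-symbol recovered by counting incoming connections.

import numpy as np

rng = np.random.default_rng(0)
c, L = 4, 64                           # 4 clusters of 64 binary neurons each
msgs = rng.integers(0, L, size=(400, c))

# Learning: each message activates one neuron per cluster; connect them all.
W = np.zeros((c, L, c, L), dtype=bool)
for m in msgs:
    for i in range(c):
        for j in range(c):
            if i != j:
                W[i, m[i], j, m[j]] = True

def complete(partial):
    """Fill in erased sub-symbols (None) by counting incoming connections."""
    known = [i for i, v in enumerate(partial) if v is not None]
    votes = sum(W[i, partial[i]] for i in known)        # (c, L) vote counts
    return [v if v is not None else int(votes[i].argmax())
            for i, v in enumerate(partial)]

ok = sum(complete([m[0], m[1], m[2], None]) == list(m) for m in msgs)
print(f"{ok}/400 messages recovered from a one-symbol erasure")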

Download the manuscript.

Bibtex
@article{GriBer20117,
  author = {Vincent Gripon and Claude Berrou},
  title = {Sparse neural networks with large learning diversity},
  journal = {IEEE Transactions on Neural Networks},
  year = {2011},
  volume = {22},
  number = {7},
  pages = {1087--1096},
  month = {July},
}



