Vincent Gripon's website

Blog about my research and teaching

Journal articles

2022

Y. Bendou, Y. Hu, R. Lafargue, G. Lioi, B. Pasdeloup, S. Pateux and V. Gripon, "Easy—Ensemble Augmented-Shot-Y-Shaped Learning: State-of-the-Art Few-Shot Classification with Simple Components," in MDPI Journal of Imaging, Volume 8, Number 7, July 2022. Manuscript.

Y. Hu, S. Pateux and V. Gripon, "Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning," in Algorithms, Volume 15, Number 5, April 2022. Manuscript.

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, T. Hannagan and D. Bertrand, "Rethinking Weight Decay For Efficient Neural Network Pruning," in Journal of Imaging, Volume 8, Number 3, March 2022. Manuscript.

2021

C. Lassance, V. Gripon and A. Ortega, "Laplacian networks: Bounding indicator function smoothness for neural networks robustness," in APSIPA Transactions on Signal and Information Processing, Volume 10, 2021. Manuscript.

M. Bontonou, L. Béthune and V. Gripon, "Predicting the Generalization Ability of a Few-Shot Classifier," in Information, Volume 12, Number 1, 2021. Manuscript.

C. Lassance, V. Gripon and A. Ortega, "Representing Deep Neural Networks Latent Space Geometries with Graphs," in Algorithms, Volume 14, Number 2, 2021. Manuscript.

C. Lassance, Y. Latif, R. Garg, V. Gripon and I. Reid, "Improved Visual Localization via Graph Filtering," in Journal of Imaging, Volume 7, Number 2, 2021. Manuscript.

V. Gripon, M. Löwe and F. Vermet, "Some Remarks on Replicated Simulated Annealing," in Journal of Statistical Physics, Volume 182, Number 3, pp. 1--22, 2021. Manuscript.

G. Coiffier, G. B. Hacene and V. Gripon, "ThriftyNets: Convolutional Neural Networks with Tiny Parameter Budget," in IoT, Volume 2, Number 2, 2021. Manuscript.

G. Lioi, V. Gripon, A. Brahim, F. Rousseau and N. Farrugia, "Gradients of connectivity as graph Fourier bases of brain activity," in Network Neuroscience, Volume 5, Number 2, pp. 322--336, March 2021. Manuscript.

P. Novac, G. B. Hacene, A. Pegatoquet, B. Miramond and V. Gripon, "Quantization and Deployment of Deep Neural Networks on Microcontrollers," in Sensors, Volume 21, Number 9, January 2021. Manuscript.

2018

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Transfer Incremental Learning Using Data Augmentation," in Applied Sciences, Volume 8, Number 12, 2018. Manuscript.

A. Iscen, T. Furon, V. Gripon, M. Rabbat and H. Jégou, "Memory vectors for similarity search in high-dimensional spaces," in IEEE Transactions on Big Data, pp. 65--77, 2018.

V. Gripon, M. Löwe and F. Vermet, "Associative Memories to Accelerate Approximate Nearest Neighbor Search," in Applied Sciences, Volume 8, Number 9, September 2018. Manuscript.

A. Mheich, M. Hassan, M. Khalil, V. Gripon, O. Dufor and F. Wendling, "SimiNet: a Novel Method for Quantifying Brain Network Similarity," in IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 40, Number 9, pp. 2238--2249, September 2018.

B. Pasdeloup, V. Gripon, G. Mercier, D. Pastor and M. Rabbat, "Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals," in IEEE Transactions on Signal and Information Processing over Networks, Volume 4, Number 3, pp. 481--496, September 2018. Manuscript.

2016

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks," in Transactions on Very Large Scale Integration Systems, Volume 27, Number 2, pp. 375--387, 2016. Manuscript.

X. Jiang, V. Gripon, C. Berrou and M. Rabbat, "Storing sequences in binary tournament-based neural networks," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 5, pp. 913--925, 2016. Manuscript.

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 2, pp. 375--387, 2016. Manuscript.

V. Gripon, J. Heusel, M. Löwe and F. Vermet, "A Comparative Study of Sparse Associative Memories," in Journal of Statistical Physics, Volume 164, pp. 105--129, 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention," in Cognitive Computation, pp. 1--20, September 2016. Manuscript.

G. Soulié, V. Gripon and M. Robert, "Compression of Deep Neural Networks on the Fly," in Lecture Notes in Computer Science, Volume 9887, pp. 153--170, September 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Neural Network Model for Solving the Feature Correspondence Problem," in Lecture Notes in Computer Science, Volume 9887, pp. 439--446, September 2016. Manuscript.

2015

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. J. Gross, "Fault-Tolerant Associative Memories Based on c-Partite Graphs," in IEEE Transactions on Signal Processing, Volume 64, Number 4, pp. 829--841, 2015. Manuscript.

2014

B. K. Aliabadi, C. Berrou, V. Gripon and X. Jiang, "Storing sparse messages in networks of neural cliques," in IEEE Transactions on Neural Networks and Learning Systems, Volume 25, pp. 980--989, 2014. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks," in Journal of Signal Processing Systems, pp. 1--13, 2014. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon, N. Sakimura, T. Sugibayashi, T. Endoh, H. Ohno, T. Hanyu and W. J. Gross, "A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture," in Journal on Emerging and Selected Topics in Circuits and Systems, Volume 4, pp. 460--474, 2014. Manuscript.

2011

V. Gripon and C. Berrou, "Sparse neural networks with large learning diversity," in IEEE Transactions on Neural Networks, Volume 22, Number 7, pp. 1087--1096, July 2011. Manuscript.

Easy—Ensemble Augmented-Shot-Y-Shaped Learning: State-of-the-Art Few-Shot Classification with Simple Components

Y. Bendou, Y. Hu, R. Lafargue, G. Lioi, B. Pasdeloup, S. Pateux and V. Gripon, "Easy—Ensemble Augmented-Shot-Y-Shaped Learning: State-of-the-Art Few-Shot Classification with Simple Components," in MDPI Journal of Imaging, Volume 8, Number 7, July 2022.

Few-shot classification aims at leveraging knowledge learned in a deep learning model, in order to obtain good classification performance on new problems, where only a few labeled samples per class are available. Recent years have seen a fair number of works in the field, each introducing its own methodology. A frequent problem, though, is the use of suboptimally trained models as a first building block, leading to doubts about whether proposed approaches bring gains if applied to more sophisticated pretrained models. In this work, we propose a simple way to train such models, with the aim of reaching top performance on multiple standardized benchmarks in the field. This methodology offers a new baseline on which to propose (and fairly compare) new techniques or adapt existing ones.

Download the manuscript.

Bibtex
@article{BenHuLafLioPasPatGri20227,
  author = {Yassir Bendou and Yuqing Hu and Raphael
Lafargue and Giulia Lioi and Bastien Pasdeloup and
Stéphane Pateux and Vincent Gripon},
  title = {Easy—Ensemble Augmented-Shot-Y-Shaped
Learning: State-of-the-Art Few-Shot Classification
with Simple Components},
  journal = {MDPI Journal of Imaging},
  year = {2022},
  volume = {8},
  number = {7},
  month = {July},
}

Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning

Y. Hu, S. Pateux and V. Gripon, "Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning," in Algorithms, Volume 15, Number 5, April 2022.

In many real-life problems, it is difficult to acquire or label large amounts of data, resulting in so-called few-shot learning problems. However, few-shot classification is a challenging problem due to the uncertainty caused by using few labeled samples. In the past few years, many methods have been proposed with the common aim of transferring knowledge acquired on a previously solved task, which is often achieved by using a pretrained feature extractor. As such, if the initial task contains many labeled samples, it is possible to circumvent the limitations of few-shot learning. A shortcoming of existing methods is that they often require priors about the data distribution, such as the balance between considered classes. In this paper, we propose a novel transfer-based method with a double aim: providing state-of-the-art performance, as reported on standardized datasets in the field of few-shot learning, while not requiring such restrictive priors. Our methodology is able to cope with both inductive cases, where prediction is performed on test samples independently from each other, and transductive cases, where a joint (batch) prediction is performed.
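
To make the transfer-based setting concrete, here is a minimal Python sketch of an inductive episode using features extracted by a frozen pretrained backbone and a simple nearest-class-mean rule. It is only a generic baseline under these assumptions; the feature transforms and the transductive estimation proposed in the paper are not reproduced here.

import numpy as np

def ncm_few_shot(support_feats, support_labels, query_feats):
    # Assign each query to the class whose support mean is closest (Euclidean distance).
    classes = np.unique(support_labels)
    means = np.stack([support_feats[support_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_feats[:, None, :] - means[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Hypothetical 5-way 1-shot episode with 64-dimensional backbone features.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 64))
labels = np.arange(5)
queries = support + 0.1 * rng.normal(size=(5, 64))
print(ncm_few_shot(support, labels, queries))  # expected: [0 1 2 3 4]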

Download the manuscript.

Bibtex
@article{HuPatGri20224,
  author = {Yuqing Hu and Stéphane Pateux and Vincent
Gripon},
  title = {Squeezing Backbone Feature Distributions to
the Max for Efficient Few-Shot Learning},
  journal = {Algorithms},
  year = {2022},
  volume = {15},
  number = {5},
  month = {April},
}

Rethinking Weight Decay For Efficient Neural Network Pruning

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, T. Hannagan and D. Bertrand, "Rethinking Weight Decay For Efficient Neural Network Pruning," in Journal of Imaging, Volume 8, Number 3, March 2022.

Introduced in the late 1980s for generalization purposes, pruning has now become a staple to compress deep neural networks. Despite many innovations brought in the last decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which realizes efficient continuous pruning throughout training. Our approach, theoretically grounded in Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks and pruning structures. We show that SWD compares favorably to state-of-the-art approaches in terms of performance/parameters ratio on the CIFAR-10, Cora and ImageNet ILSVRC2012 datasets.
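
As a rough illustration of the idea of decaying only the weights that are candidates for pruning, here is a hedged PyTorch sketch. The magnitude-based threshold, the scheduling of the penalty strength and the structured-pruning variants are assumptions of this sketch, not the exact recipe of the paper.

import torch

def selective_decay_penalty(model, sparsity, strength):
    # Extra L2 penalty applied only to the weights that would currently be
    # pruned (smallest magnitudes); `strength` is meant to grow during training.
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_w, sparsity)
    penalty = 0.0
    for p in model.parameters():
        mask = (p.detach().abs() <= threshold).float()  # only the would-be-pruned weights
        penalty = penalty + (mask * p).pow(2).sum()
    return strength * penalty

# Usage inside a training step (illustrative values):
#   loss = task_loss + selective_decay_penalty(model, sparsity=0.8, strength=1e-3)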

Download the manuscript.

Bibtex
@article{TesGriLéArzHanBer20223,
  author = {Hugo Tessier and Vincent Gripon and
Mathieu Léonardon and Matthieu Arzel and Thomas
Hannagan and David Bertrand},
  title = {Rethinking Weight Decay For Efficient
Neural Network Pruning},
  journal = {Journal of Imaging},
  year = {2022},
  volume = {8},
  number = {3},
  month = {March},
}

Laplacian networks: Bounding indicator function smoothness for neural networks robustness

C. Lassance, V. Gripon and A. Ortega, "Laplacian networks: Bounding indicator function smoothness for neural networks robustness," in APSIPA Transactions on Signal and Information Processing, Volume 10, 2021.

For the past few years, deep learning (DL) robustness (i.e. the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training using noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness to improve robustness on classical supervised learning vision datasets for various types of perturbations. We also show it can be combined with existing methods to increase overall robustness.
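
A minimal PyTorch sketch of the kind of quantity being penalized, assuming a cosine-similarity graph built from each batch at each layer; the exact graph construction and weighting used in the paper may differ.

import torch
import torch.nn.functional as F

def label_smoothness(features, labels, num_classes):
    # Smoothness of the class-indicator signals on a similarity graph built
    # from one layer's batch representations: sum over classes of y_c^T L y_c.
    x = F.normalize(features.flatten(1), dim=1)
    w = torch.relu(x @ x.t())                   # similarity graph (batch x batch)
    lap = torch.diag(w.sum(dim=1)) - w          # combinatorial Laplacian
    y = F.one_hot(labels, num_classes).float()  # indicator signals
    return torch.einsum('nc,nm,mc->', y, lap, y)

def laplacian_regularizer(per_layer_features, labels, num_classes):
    # Penalize large changes of indicator smoothness across consecutive layers.
    s = [label_smoothness(f, labels, num_classes) for f in per_layer_features]
    return sum((s[i + 1] - s[i]).abs() for i in range(len(s) - 1))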

Download the manuscript.

Bibtex
@article{LasGriOrt2021,
  author = {Carlos Lassance and Vincent Gripon and
Antonio Ortega},
  title = {Laplacian networks: Bounding indicator
function smoothness for neural networks robustness},
  journal = {APSIPA Transactions on Signal and
Information Processing},
  year = {2021},
  volume = {10},
}

Predicting the Generalization Ability of a Few-Shot Classifier

M. Bontonou, L. Béthune and V. Gripon, "Predicting the Generalization Ability of a Few-Shot Classifier," in Information, Volume 12, Number 1, 2021.

In the context of few-shot learning, one cannot measure the generalization ability of a trained classifier using validation sets, due to the small number of labeled samples. In this paper, we are interested in finding alternatives to answer the question: is my classifier generalizing well to new data? We investigate the case of transfer-based few-shot learning solutions, and consider three settings: (i) supervised, where we only have access to a few labeled samples; (ii) semi-supervised, where we have access to both a few labeled samples and a set of unlabeled samples; and (iii) unsupervised, where we only have access to unlabeled samples. For each setting, we propose reasonable measures that we empirically demonstrate to be correlated with the generalization ability of the considered classifiers. We also show that these simple measures can predict the generalization ability up to a certain confidence. We conduct our experiments on standard few-shot vision datasets.

Download the manuscript.

Bibtex
@article{BonBéGri2021,
  author = {Myriam Bontonou and Louis Béthune and
Vincent Gripon},
  title = {Predicting the Generalization Ability of a
Few-Shot Classifier},
  journal = {Information},
  year = {2021},
  volume = {12},
  number = {1},
}

Representing Deep Neural Networks Latent Space Geometries with Graphs

C. Lassance, V. Gripon and A. Ortega, "Representing Deep Neural Networks Latent Space Geometries with Graphs," in Algorithms, Volume 14, Number 2, 2021.

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved via enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
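
The following Python sketch illustrates how a Latent Geometry Graph can be built from a batch of intermediate representations and used for the distillation case (i); the k-nearest-neighbour cosine graph and the mean-squared matching loss are illustrative choices, not necessarily those of the paper.

import torch
import torch.nn.functional as F

def latent_geometry_graph(batch_features, k=4):
    # Row-normalized k-nearest-neighbour cosine-similarity graph built from a
    # batch of intermediate representations (one Latent Geometry Graph).
    x = F.normalize(batch_features.flatten(1), dim=1)
    sim = x @ x.t()
    topk = sim.topk(k + 1, dim=1).indices              # keep k neighbours (plus self)
    mask = torch.zeros_like(sim).scatter_(1, topk, 1.0)
    graph = sim * mask
    return graph / graph.sum(dim=1, keepdim=True)

def lgg_distillation_loss(student_features, teacher_features):
    # Make the student's latent geometry mimic the teacher's (sketch only).
    return F.mse_loss(latent_geometry_graph(student_features),
                      latent_geometry_graph(teacher_features))

loss = lgg_distillation_loss(torch.randn(16, 128), torch.randn(16, 256))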

Download the manuscript.

Bibtex
@article{LasGriOrt2021,
  author = {Carlos Lassance and Vincent Gripon and
Antonio Ortega},
  title = {Representing Deep Neural Networks Latent
Space Geometries with Graphs},
  journal = {Algorithms},
  year = {2021},
  volume = {14},
  number = {2},
}

Improved Visual Localization via Graph Filtering

C. Lassance, Y. Latif, R. Garg, V. Gripon and I. Reid, "Improved Visual Localization via Graph Filtering," in Journal of Imaging, Volume 7, Number 2, 2021.

Vision-based localization is the problem of inferring the pose of the camera given a single image. One commonly used approach relies on image retrieval where the query input is compared against a database of localized support examples and its pose is inferred with the help of the retrieved items. This assumes that images taken from the same places consist of the same landmarks and thus would have similar feature representations. These representations can learn to be robust to different variations in capture conditions like time of the day or weather. In this work, we introduce a framework which aims at enhancing the performance of such retrieval-based localization methods. It consists in taking into account additional information available, such as GPS coordinates or temporal proximity in the acquisition of the images. More precisely, our method consists in constructing a graph based on this additional information that is later used to improve reliability of the retrieval process by filtering the feature representations of support and/or query images. We show that the proposed method is able to significantly improve the localization accuracy on two large scale datasets, as well as the mean average precision in classical image retrieval scenarios.
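
As an illustration of the filtering step, here is a minimal Python sketch that smooths image descriptors over a graph built from side information such as temporal proximity; the diffusion filter and its parameters are assumptions of this sketch rather than the exact filter studied in the paper.

import numpy as np

def graph_filter_features(features, adjacency, alpha=0.5, steps=2):
    # Low-pass graph filtering of image descriptors: each descriptor is
    # repeatedly mixed with those of its graph neighbours (e.g. images close
    # in GPS position or acquisition time).
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    transition = adjacency / deg                  # row-stochastic neighbour averaging
    x = features.copy()
    for _ in range(steps):
        x = (1 - alpha) * features + alpha * transition @ x
    return x

# Hypothetical example: 4 images chained by temporal proximity.
feats = np.random.default_rng(0).normal(size=(4, 8))
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
smoothed = graph_filter_features(feats, adj)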

Download the manuscript.

Bibtex
@article{LasLatGarGriRei2021,
  author = {Carlos Lassance and Yasir Latif and Ravi
Garg and Vincent Gripon and Ian Reid},
  title = {Improved Visual Localization via Graph
Filtering},
  journal = {Journal of Imaging},
  year = {2021},
  volume = {7},
  number = {2},
}

Some Remarks on Replicated Simulated Annealing

V. Gripon, M. Löwe and F. Vermet, "Some Remarks on Replicated Simulated Annealing," in Journal of Statistical Physics, Volume 182, Number 3, pp. 1--22, 2021.

Recently, authors have introduced the idea of training neural networks with discrete weights using a mix between classical simulated annealing and a replica ansatz known from the statistical physics literature. Among other points, they claim their method is able to find robust configurations. In this paper, we analyze this so-called “replicated simulated annealing” algorithm. In particular, we give criteria to guarantee its convergence, and study when it successfully samples from configurations. We also perform experiments using synthetic and real databases.
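
For readers unfamiliar with the algorithm, here is a generic Python sketch of replicated simulated annealing on binary configurations: several replicas are annealed jointly with an extra term rewarding agreement between them. The energy function, the coupling and the schedule below are illustrative only, not the exact setup analysed in the paper.

import numpy as np

def replicated_simulated_annealing(energy, n_bits, n_replicas=3, gamma=0.5,
                                   betas=np.linspace(0.1, 5.0, 2000), seed=0):
    rng = np.random.default_rng(seed)
    reps = rng.choice([-1.0, 1.0], size=(n_replicas, n_bits))

    def total_energy(r):
        # Sum of replica energies minus a coupling rewarding pairwise agreement.
        coupling = sum(np.dot(r[a], r[b]) for a in range(n_replicas)
                       for b in range(a + 1, n_replicas)) / n_bits
        return sum(energy(x) for x in r) - gamma * coupling

    current = total_energy(reps)
    for beta in betas:
        a, i = rng.integers(n_replicas), rng.integers(n_bits)
        reps[a, i] *= -1                           # propose a single spin flip
        proposed = total_energy(reps)
        if rng.random() >= np.exp(-beta * (proposed - current)):
            reps[a, i] *= -1                       # reject: undo the flip
        else:
            current = proposed
    return reps

# Toy usage: minimize disagreement with a random target configuration.
target = np.sign(np.random.default_rng(1).normal(size=20))
solution = replicated_simulated_annealing(lambda x: -np.dot(x, target), 20)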

Download the manuscript.

Bibtex
@article{GriLöVer2021,
  author = {Vincent Gripon and Matthias Löwe and
Franck Vermet},
  title = {Some Remarks on Replicated Simulated
Annealing},
  journal = {Journal of Statistical Physics},
  year = {2021},
  volume = {182},
  number = {3},
  pages = {1--22},
}

ThriftyNets: Convolutional Neural Networks with Tiny Parameter Budget

G. Coiffier, G. B. Hacene and V. Gripon, "ThriftyNets: Convolutional Neural Networks with Tiny Parameter Budget," in IoT, Volume 2, Number 2, 2021.

Deep Neural Networks are state-of-the-art in a large number of challenges in machine learning. However, to reach the best performance they require a huge pool of parameters. Indeed, typical deep convolutional architectures present an increasing number of feature maps as we go deeper in the network, whereas spatial resolution of inputs is decreased through downsampling operations. This means that most of the parameters lie in the final layers, while a large portion of the computations are performed by a small fraction of the total parameters in the first layers. In an effort to use every parameter of a network at its maximum, we propose a new convolutional neural network architecture, called ThriftyNet. In ThriftyNet, only one convolutional layer is defined and used recursively, leading to a maximal parameter factorization. In complement, normalization, non-linearities, downsampling and shortcuts ensure sufficient expressivity of the model. ThriftyNet achieves competitive performance on a tiny parameter budget, exceeding 91% accuracy on CIFAR-10 with less than 40 k parameters in total, 74.3% on CIFAR-100 with less than 600 k parameters, and 67.1% on ImageNet ILSVRC 2012 with no more than 4.15 M parameters. However, the proposed method typically requires more computations than existing counterparts.
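
A minimal PyTorch sketch of the recursive principle, assuming a single shared convolution with a residual shortcut and periodic downsampling; the actual ThriftyNet uses a more elaborate history of activations, normalization scheme and hyper-parameters not shown here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRecursiveNet(nn.Module):
    # One convolution reused at every iteration; hyper-parameters are illustrative.
    def __init__(self, channels=64, iterations=12, num_classes=10):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # the only recursed layer
        self.norm = nn.BatchNorm2d(channels)
        self.head = nn.Linear(channels, num_classes)
        self.iterations = iterations

    def forward(self, x):
        h = self.embed(x)
        for t in range(self.iterations):
            h = h + F.relu(self.norm(self.conv(h)))   # recursive use with a shortcut
            if (t + 1) % 4 == 0:                      # occasional downsampling
                h = F.max_pool2d(h, 2)
        return self.head(h.mean(dim=(2, 3)))          # global average pooling

logits = TinyRecursiveNet()(torch.randn(2, 3, 32, 32))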

Download the manuscript.

Bibtex
@article{CoiHacGri2021,
  author = {Guillaume Coiffier and Ghouthi Boukli
Hacene and Vincent Gripon},
  title = {ThriftyNets: Convolutional Neural Networks
with Tiny Parameter Budget},
  journal = {IoT},
  year = {2021},
  volume = {2},
  number = {2},
}

Gradients of connectivity as graph Fourier bases of brain activity

G. Lioi, V. Gripon, A. Brahim, F. Rousseau and N. Farrugia, "Gradients of connectivity as graph Fourier bases of brain activity," in Network Neuroscience, Volume 5, Number 2, pp. 322--336, March 2021.

The application of graph theory to model the complex structure and function of the brain has shed new light on its organization, prompting the emergence of network neuroscience. Despite the tremendous progress that has been achieved in this field, still relatively few methods exploit the topology of brain networks to analyze brain activity. Recent attempts in this direction have leveraged on the one hand graph spectral analysis (to decompose brain connectivity into eigenmodes or gradients) and on the other hand graph signal processing (to decompose brain activity "coupled to" an underlying network in graph Fourier modes). These studies have used a variety of imaging techniques (e.g., fMRI, electroencephalography, diffusion-weighted and myelin-sensitive imaging) and connectivity estimators to model brain networks. Results are promising in terms of interpretability and functional relevance, but methodologies and terminology are variable. The goals of this paper are twofold. First, we summarize recent contributions related to connectivity gradients and graph signal processing, and attempt a clarification of the terminology and methods used in the field, while pointing out current methodological limitations. Second, we discuss the perspective that the functional relevance of connectivity gradients could be fruitfully exploited by considering them as graph Fourier bases of brain activity.
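
The link between connectivity and graph Fourier analysis can be summarized in a few lines of Python: the eigenvectors of the Laplacian of a connectivity matrix form a basis onto which brain activity can be projected. The random matrices below are purely illustrative.

import numpy as np

def graph_fourier_basis(connectivity):
    # Eigenvectors of the combinatorial Laplacian, ordered by eigenvalue; the
    # first non-trivial modes play the role of connectivity gradients.
    laplacian = np.diag(connectivity.sum(axis=1)) - connectivity
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    return eigvals, eigvecs

def graph_fourier_transform(activity, eigvecs):
    # Project brain activity (regions x time) onto the graph Fourier modes.
    return eigvecs.T @ activity

# Hypothetical example with 5 brain regions and 100 time points.
rng = np.random.default_rng(0)
conn = rng.random((5, 5))
conn = (conn + conn.T) / 2
np.fill_diagonal(conn, 0)
signals = rng.normal(size=(5, 100))
vals, modes = graph_fourier_basis(conn)
coeffs = graph_fourier_transform(signals, modes)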

Download the manuscript.

Bibtex
@article{LioGriBraRouFar20213,
  author = {Giulia Lioi and Vincent Gripon and
Abdelbasset Brahim and François Rousseau and Nicolas
Farrugia},
  title = {Gradients of connectivity as graph Fourier
bases of brain activity },
  journal = {Network Neuroscience},
  year = {2021},
  volume = {5},
  number = {2},
  pages = {322--336},
  month = {March},
}

Quantization and Deployment of Deep Neural Networks on Microcontrollers

P. Novac, G. B. Hacene, A. Pegatoquet, B. Miramond and V. Gripon, "Quantization and Deployment of Deep Neural Networks on Microcontrollers," in Sensors, Volume 21, Number 9, January 2021.

Embedding Artificial Intelligence onto low-power devices is a challenging task that has been partly overcome with recent advances in machine learning and hardware design. Presently, deep neural networks can be deployed on embedded targets to perform different tasks such as speech recognition, object detection or Human Activity Recognition. However, there is still room for optimization of deep neural networks onto embedded devices. These optimizations mainly address power consumption, memory and real-time constraints, but also an easier deployment at the edge. Moreover, there is still a need for a better understanding of what can be achieved for different use cases. This work focuses on quantization and deployment of deep neural networks onto low-power 32-bit microcontrollers. The quantization methods, relevant in the context of an embedded execution onto a microcontroller, are first outlined. Then, a new framework for end-to-end deep neural networks training, quantization and deployment is presented. This framework, called MicroAI, is designed as an alternative to existing inference engines (TensorFlow Lite for Microcontrollers and STM32Cube.AI). Our framework can indeed be easily adjusted and/or extended for specific use cases. Execution using single-precision 32-bit floating-point as well as fixed-point 8- and 16-bit integers is supported. The proposed quantization method is evaluated with three different datasets (UCI-HAR, Spoken MNIST and GTSRB). Finally, a comparison study between MicroAI and both existing embedded inference engines is provided in terms of memory and power efficiency. On-device evaluation is done using ARM Cortex-M4F-based microcontrollers (Ambiq Apollo3 and STM32L452RE).
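
As a small illustration of fixed-point quantization, here is a generic Python sketch that maps a floating-point weight tensor to 8-bit integers with a power-of-two scale; it conveys the principle only and is not MicroAI's exact quantization scheme.

import numpy as np

def quantize_fixed_point(weights, n_bits=8):
    # Symmetric fixed-point quantization: choose a power-of-two scale so the
    # largest magnitude fits in an n-bit signed integer, then round.
    max_int = 2 ** (n_bits - 1) - 1
    frac_bits = int(np.floor(np.log2(max_int / np.abs(weights).max())))
    q = np.clip(np.round(weights * 2 ** frac_bits), -max_int - 1, max_int)
    return q.astype(np.int8 if n_bits == 8 else np.int16), frac_bits

def dequantize(q, frac_bits):
    return q.astype(np.float32) / 2 ** frac_bits

w = np.array([0.42, -1.3, 0.07, 0.9], dtype=np.float32)
q, frac = quantize_fixed_point(w)
print(q, frac, dequantize(q, frac))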

Download the manuscript.

Bibtex
@article{NovHacPegMirGri20211,
  author = {Pierre-Emmanuel Novac and Ghouthi Boukli
Hacene and Alain Pegatoquet and Benoît Miramond and
Vincent Gripon},
  title = {Quantization and Deployment of Deep Neural
Networks on Microcontrollers},
  journal = {Sensors},
  year = {2021},
  volume = {21},
  number = {9},
  month = {January},
}

Transfer Incremental Learning Using Data Augmentation

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Transfer Incremental Learning Using Data Augmentation," in Applied Sciences, Volume 8, Number 12, 2018.

Deep learning-based methods have reached state-of-the-art performance, relying on a large quantity of available data and computational power. Such methods still remain highly inappropriate when facing a major open machine learning problem, which consists of learning incrementally new classes and examples over time. Combining the outstanding performance of Deep Neural Networks (DNNs) with the flexibility of incremental learning techniques is a promising avenue of research. In this contribution, we introduce Transfer Incremental Learning using Data Augmentation (TILDA). TILDA is based on pre-trained DNNs as feature extractors, robust selection of feature vectors in subspaces using a nearest-class-mean based technique, majority votes and data augmentation at both the training and the prediction stages. Experiments on challenging vision datasets demonstrate the ability of the proposed method for low complexity incremental learning, while achieving significantly better accuracy than existing incremental counterparts.
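
The incremental part of the pipeline can be illustrated with a short Python sketch of a nearest-class-mean classifier whose class statistics are updated one example at a time; the subspace splitting, majority voting and data augmentation of TILDA are deliberately omitted here.

import numpy as np

class IncrementalNCM:
    # Incremental nearest-class-mean classifier over features produced by a
    # frozen pretrained extractor: new classes and examples are added without retraining.
    def __init__(self):
        self.sums, self.counts = {}, {}

    def add(self, feature, label):
        if label not in self.sums:
            self.sums[label] = np.zeros_like(feature)
            self.counts[label] = 0
        self.sums[label] += feature
        self.counts[label] += 1

    def predict(self, feature):
        means = {c: s / self.counts[c] for c, s in self.sums.items()}
        return min(means, key=lambda c: np.linalg.norm(feature - means[c]))

clf = IncrementalNCM()
clf.add(np.array([1.0, 0.0]), "cat")
clf.add(np.array([0.0, 1.0]), "dog")
print(clf.predict(np.array([0.9, 0.2])))  # "cat"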

Download the manuscript.

Bibtex
@article{HacGriFarArzJez2018,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
  title = {Transfer Incremental Learning Using Data
Augmentation},
  journal = {Applied Sciences},
  year = {2018},
  volume = {8},
  number = {12},
}

Memory vectors for similarity search in high-dimensional spaces

A. Iscen, T. Furon, V. Gripon, M. Rabbat and H. Jégou, "Memory vectors for similarity search in high-dimensional spaces," in IEEE Transactions on Big Data, pp. 65--77, 2018.

We study an indexing architecture to store and search in a database of high-dimensional vectors from the perspective of statistical signal processing and decision theory. This architecture is composed of several memory units, each of which summarizes a fraction of the database by a single representative vector. The potential similarity of the query to one of the vectors stored in the memory unit is gauged by a simple correlation with the memory unit’s representative vector. This representative optimizes the test of the following hypothesis: the query is independent from any vector in the memory unit vs. the query is a simple perturbation of one of the stored vectors. Compared to exhaustive search, our approach finds the most similar database vectors significantly faster without a noticeable reduction in search quality. Interestingly, the reduction of complexity is provably better in high-dimensional spaces. We empirically demonstrate its practical interest in a large-scale image search scenario with off-the-shelf state-of-the-art descriptors.
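
A minimal Python sketch of the idea, using the sum of the stored vectors as the representative of each memory unit (the paper also analyses a pseudo-inverse-based representative): units are ranked by correlating the query with their representatives, and only the best units are searched exhaustively.

import numpy as np

def build_memory_units(database, unit_size):
    # Split the database into memory units, each summarized by the sum of its vectors.
    units = [database[i:i + unit_size] for i in range(0, len(database), unit_size)]
    representatives = np.stack([u.sum(axis=0) for u in units])
    return units, representatives

def search(query, units, representatives, n_probe=2):
    # Score units by correlation with their representatives, then search
    # exhaustively only inside the highest-scoring units.
    scores = representatives @ query
    best_units = np.argsort(-scores)[:n_probe]
    candidates = np.concatenate([units[i] for i in best_units])
    return candidates[np.argmax(candidates @ query)]

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 32))
db /= np.linalg.norm(db, axis=1, keepdims=True)
units, reps = build_memory_units(db, unit_size=50)
query = db[123] + 0.05 * rng.normal(size=32)
nearest = search(query, units, reps)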


Bibtex
@article{IscFurGriRabJé2018,
  author = {Ahmet Iscen and Teddy Furon and Vincent
Gripon and Michael Rabbat and Hervé Jégou},
  title = {Memory vectors for similarity search in
high-dimensional spaces},
  journal = {IEEE Transactions on Big Data},
  year = {2018},
  pages = {65--77},
}

Associative Memories to Accelerate Approximate Nearest Neighbor Search

V. Gripon, M. Löwe and F. Vermet, "Associative Memories to Accelerate Approximate Nearest Neighbor Search," in Applied Sciences, Volume 8, Number 9, September 2018.

Nearest neighbor search is a very active field in machine learning. It appears in many application cases, including classification and object retrieval. In its naive implementation, the complexity of the search is linear in the product of the dimension and the cardinality of the collection of vectors into which the search is performed. Recently, many works have focused on reducing the dimension of vectors using quantization techniques or hashing, while providing an approximate result. In this paper, we focus instead on tackling the cardinality of the collection of vectors. Namely, we introduce a technique that partitions the collection of vectors and stores each part in its own associative memory. When a query vector is given to the system, associative memories are polled to identify which one contains the closest match. Then, an exhaustive search is conducted only on the part of vectors stored in the selected associative memory. We study the effectiveness of the system when messages to store are generated from i.i.d. uniform ±1 random variables or 0–1 sparse i.i.d. random variables. We also conduct experiments on both synthetic data and real data and show that it is possible to achieve interesting trade-offs between complexity and accuracy.

Download the manuscript.

Bibtex
@article{GriLöVer20189,
  author = {Vincent Gripon and Matthias Löwe and
Franck Vermet},
  title = {Associative Memories to Accelerate
Approximate Nearest Neighbor Search},
  journal = {Applied Sciences},
  year = {2018},
  volume = {8},
  number = {9},
  month = {September},
}

SimiNet: a Novel Method for Quantifying Brain Network Similarity

A. Mheich, M. Hassan, M. Khalil, V. Gripon, O. Dufor and F. Wendling, "SimiNet: a Novel Method for Quantifying Brain Network Similarity," in IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 40, Number 9, pp. 2238--2249, September 2018.

Quantifying the similarity between two networks is critical in many applications. A number of algorithms have been proposed to compute graph similarity, mainly based on the properties of nodes and edges. Interestingly, most of these algorithms ignore the physical location of the nodes, which is a key factor in the context of brain networks involving spatially defined functional areas. In this paper, we present a novel algorithm called "SimiNet" for measuring similarity between two graphs whose nodes are defined a priori within a 3D coordinate system. SimiNet provides a quantified index (ranging from 0 to 1) that accounts for node, edge and spatiality features. Complex graphs were simulated to evaluate the performance of SimiNet, which is compared with eight state-of-the-art methods. Results show that SimiNet is able to detect weak spatial variations in compared graphs in addition to computing similarity using both nodes and edges. SimiNet was also applied to real brain networks obtained during a visual recognition task. The algorithm shows high performance in detecting spatial variation of brain networks obtained during a naming task on two categories of visual stimuli: animals and tools. A perspective to this work is a better understanding of object categorization in the human brain.


Bibtex
@article{MheHasKhaGriDufWen201809,
  author = {Ahmad Mheich and Mahmoud Hassan and
Mohamad Khalil and Vincent Gripon and Olivier Dufor
and Fabrice Wendling},
  title = {SimiNet: a Novel Method for Quantifying
Brain Network Similarity},
  journal = {IEEE Transactions on Pattern Analysis and
Machine Intelligence},
  year = {2018},
  volume = {40},
  number = {9},
  pages = {2238--2249},
  month = {September},
}

Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals

B. Pasdeloup, V. Gripon, G. Mercier, D. Pastor and M. Rabbat, "Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals," in IEEE Transactions on Signal and Information Processing over Networks, Volume 4, Number 3, pp. 481--496, September 2018.

Many tools from the field of graph signal processing exploit knowledge of the underlying graph’s structure (e.g., as encoded in the Laplacian matrix) to process signals on the graph. Therefore, in the case when no graph is available, graph signal processing tools cannot be used anymore. Researchers have proposed approaches to infer a graph topology from observations of signals on its vertices. Since the problem is ill-posed, these approaches make assumptions, such as smoothness of the signals on the graph, or sparsity priors. In this paper, we propose a characterization of the space of valid graphs, in the sense that they can explain stationary signals. To simplify the exposition in this paper, we focus here on the case where signals were i.i.d. at some point back in time and were observed after diffusion on a graph. We show that the set of graphs verifying this assumption has a strong connection with the eigenvectors of the covariance matrix, and forms a convex set. Along with a theoretical study in which these eigenvectors are assumed to be known, we consider the practical case when the observations are noisy, and experimentally observe how fast the set of valid graphs converges to the set obtained when the exact eigenvectors are known, as the number of observations grows. To illustrate how this characterization can be used for graph recovery, we present two methods for selecting a particular point in this set under chosen criteria, namely graph simplicity and sparsity. Additionally, we introduce a measure to evaluate how much a graph is adapted to signals under a stationarity assumption. Finally, we evaluate how state-of-the-art methods relate to this framework through experiments on a dataset of temperatures.
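
The following Python sketch illustrates the core observation on synthetic data: when white signals are diffused by an unknown symmetric operator, the covariance of the observations shares its eigenvectors with that operator, so the eigenbasis can be estimated from data. The subsequent selection of eigenvalues under simplicity or sparsity criteria, which actually recovers a graph, is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_signals = 8, 5000

# Random symmetric graph, normalized so diffusion is well behaved.
adjacency = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)
adjacency = np.triu(adjacency, 1)
adjacency = adjacency + adjacency.T
diffusion = adjacency / max(np.abs(np.linalg.eigvalsh(adjacency)).max(), 1.0)

white = rng.normal(size=(n_nodes, n_signals))                  # i.i.d. signals
observed = np.linalg.matrix_power(np.eye(n_nodes) + diffusion, 3) @ white

covariance = observed @ observed.T / n_signals
_, estimated_basis = np.linalg.eigh(covariance)
_, true_basis = np.linalg.eigh(diffusion)

# Columns match up to sign; the alignment sharpens as n_signals grows.
alignment = np.abs(estimated_basis.T @ true_basis)
print(np.round(alignment, 2))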

Download the manuscript.

Bibtex
@article{PasGriMerPasRab201809,
  author = {Bastien Pasdeloup and Vincent Gripon and
Grégoire Mercier and Dominique Pastor and Michael
Rabbat},
  title = {Characterization and Inference of Graph
Diffusion Processes from Observations of Stationary
Signals},
  journal = {IEEE Transactions on Signal and
Information Processing over Networks},
  year = {2018},
  volume = {4},
  number = {3},
  pages = {481--496},
  month = {September},
}

Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse-Clustered Networks," in Transactions on Very Large Scale Integration Systems, Volume 27, Number 2, pp. 375--387, 2016.

We propose a low-power content-addressable memory (CAM) employing a new algorithm for associativity between the input tag and the corresponding address of the output data. The proposed architecture is based on a recently developed sparse clustered network using binary connections that on average eliminates most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower compared with that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs the comparisons on them to locate a single valid match. TSMC 65-nm CMOS technology was used for simulation purposes. Following a selection of design parameters, such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8%, and 26% of that of the conventional NAND architecture, respectively, with a 10% area overhead. A design methodology based on the silicon area and power budgets, and performance requirements is discussed.

Download the manuscript.

Bibtex
@article{JarGriOniGro2016,
  author = {Hooman Jarollahi and Vincent Gripon and
Naoya Onizawa and Warren J. Gross},
  title = {Algorithm and Architecture for a Low-Power
Content-Addressable Memory Based on Sparse-Clustered
Networks},
  journal = {Transactions on Very Large Scale
Integration Systems},
  year = {2016},
  volume = {27},
  number = {2},
  pages = {375--387},
}

Storing sequences in binary tournament-based neural networks

X. Jiang, V. Gripon, C. Berrou and M. Rabbat, "Storing sequences in binary tournament-based neural networks," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 5, pp. 913--925, 2016.

An extension to a recently introduced architecture of clique-based neural networks is presented. This extension makes it possible to store sequences with high efficiency. To obtain this property, network connections are provided with orientation and with flexible redundancy carried by both spatial and temporal redundancy, a mechanism of anticipation being introduced in the model. In addition to the sequence storage with high efficiency, this new scheme also offers biological plausibility. In order to achieve accurate sequence retrieval, a double-layered structure combining hetero-association and auto-association is also proposed.

Download the manuscript.

Bibtex
@article{JiaGriBerRab2016,
  author = {Xiaoran Jiang and Vincent Gripon and
Claude Berrou and Michael Rabbat},
  title = {Storing sequences in binary
tournament-based neural networks},
  journal = {IEEE Transactions on Neural Networks and
Learning Systems},
  year = {2016},
  volume = {27},
  number = {5},
  pages = {913--925},
}

Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques. Applications in Power Management in Electronic circuits," in IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Number 2, pp. 375--387, 2016.

Associative memories are data structures that allow retrieval of previously stored messages given part of their content. They thus behave similarly to the human brain’s memory, which is capable, for instance, of retrieving the end of a song given its beginning. Among different families of associative memories, sparse ones are known to provide the best efficiency (ratio of the number of bits stored to that of bits used). Recently, a new family of sparse associative memories achieving almost-optimal efficiency has been proposed. Their structure induces a direct mapping between input messages and stored patterns. Nevertheless, it is well known that non-uniformity of the stored messages can lead to a dramatic decrease in performance. In this work, we show the impact of non-uniformity on the performance of this recent model and we exploit the structure of the model to improve its performance in practical applications where data is not necessarily uniform. In order to approach the performance of networks with uniformly distributed messages presented in theoretical studies, twin neurons are introduced. To assess the adapted model, twin neurons are used with real-world data to optimize power consumption of electronic circuits in practical test cases.

Download the manuscript.

Bibtex
@article{BogGriSegHei2016,
  author = {Bartosz Boguslawski and Vincent Gripon and
Fabrice Seguin and Frédéric Heitzmann},
  title = {Twin Neurons for Efficient Real-World Data
Distribution in Networks of Neural Cliques.
Applications in Power Management in Electronic
circuits},
  journal = {IEEE Transactions on Neural Networks and
Learning Systems},
  year = {2016},
  volume = {27},
  number = {2},
  pages = {375--387},
}

A Comparative Study of Sparse Associative Memories

V. Gripon, J. Heusel, M. Löwe and F. Vermet, "A Comparative Study of Sparse Associative Memories," in Journal of Statistical Physics, Volume 164, pp. 105--129, 2016.

We study various models of associative memories with sparse information, i.e. a pattern to be stored is a random string of 0s and 1s with only about log N 1s. We compare different synaptic weights, architectures and retrieval mechanisms to shed light on the influence of the various parameters on the storage capacity.

Download the manuscript.

Bibtex
@article{GriHeuLöVer2016,
  author = {Vincent Gripon and Judith Heusel and
Matthias Löwe and Franck Vermet},
  title = {A Comparative Study of Sparse Associative
Memories},
  journal = {Journal of Statistical Physics},
  year = {2016},
  volume = {164},
  pages = {105--129},
}

A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention

A. Aboudib, V. Gripon and G. Coppin, "A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention," in Cognitive Computation, pp. 1--20, September 2016.

An emerging trend in visual information processing is toward incorporating some interesting properties of the ventral stream in order to account for some limitations of machine learning algorithms. Selective attention and cortical magnification are two such important phenomena that have been the subject of a large body of research in recent years. In this paper, we focus on designing a new model for visual acquisition that takes these important properties into account.

Download the manuscript.

Bibtex
@article{AboGriCop20169,
  author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
  title = {A Biologically Inspired Framework for
Visual Information Processing and an Application on
Modeling Bottom-Up Visual Attention},
  journal = {Cognitive Computation},
  year = {2016},
  pages = {1--20},
  month = {September},
}

Compression of Deep Neural Networks on the Fly

G. Soulié, V. Gripon and M. Robert, "Compression of Deep Neural Networks on the Fly," in Lecture Notes in Computer Science, Volume 9887, pp. 153--170, September 2016.

Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve the best results, they use millions of parameters to be trained. However, when targeting embedded applications the size of these models becomes problematic. As a consequence, their usage on smartphones or other resource-limited devices is prohibited. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.
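
As an illustration of the Product Quantization step applied to trained weights, here is a self-contained Python sketch using a plain k-means per sub-vector; the regularization term added to the cost function during training is not shown, and all sizes are illustrative.

import numpy as np

def product_quantize(weights, n_subvectors=4, n_centroids=16, n_iters=20, seed=0):
    # Each row of the weight matrix is split into sub-vectors, and each
    # sub-vector is replaced by the nearest of a few learned centroids, so only
    # centroid indices and codebooks need to be stored.
    rng = np.random.default_rng(seed)
    rows, cols = weights.shape
    sub = weights.reshape(rows, n_subvectors, cols // n_subvectors)
    codes = np.empty((rows, n_subvectors), dtype=np.int64)
    books = []
    for s in range(n_subvectors):
        data = sub[:, s, :]
        centroids = data[rng.choice(rows, n_centroids, replace=False)]
        for _ in range(n_iters):                                   # plain k-means
            assign = np.argmin(((data[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
            for c in range(n_centroids):
                if np.any(assign == c):
                    centroids[c] = data[assign == c].mean(axis=0)
        codes[:, s] = assign
        books.append(centroids)
    reconstructed = np.concatenate([books[s][codes[:, s]] for s in range(n_subvectors)], axis=1)
    return codes, books, reconstructed

w = np.random.default_rng(1).normal(size=(256, 64)).astype(np.float32)
codes, books, w_hat = product_quantize(w)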

Download the manuscript.

Bibtex
@article{SouGriRob20169,
  author = {Guillaume Soulié and Vincent Gripon and
Maëlys Robert},
  title = {Compression of Deep Neural Networks on the
Fly},
  journal = {Lecture Notes in Computer Science},
  year = {2016},
  volume = {9887},
  pages = {153--170},
  month = {September},
}

A Neural Network Model for Solving the Feature Correspondence Problem

A. Aboudib, V. Gripon and G. Coppin, "A Neural Network Model for Solving the Feature Correspondence Problem," in Lecture Notes in Computer Science, Volume 9887, pp. 439--446, September 2016.

Finding correspondences between image features is a fundamental question in computer vision. Many models in the literature have proposed to view this as a graph matching problem whose solution can be approximated using optimization principles. In this paper, we propose a different treatment of this problem from a neural network perspective. We present a new model for matching features inspired by the architecture of a recently introduced neural network. We show that by using popular neural network principles like max-pooling, k-winners-take-all and iterative processing, we obtain a better accuracy at matching features in cluttered environments. The proposed solution is accompanied by an experimental evaluation and is compared to state-of-the-art models.

Download the manuscript.

Bibtex
@article{AboGriCop20169,
  author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
  title = {A Neural Network Model for Solving the
Feature Correspondence Problem},
  journal = {Lecture Notes in Computer Science},
  year = {2016},
  volume = {9887},
  pages = {439--446},
  month = {September},
}

Fault-Tolerant Associative Memories Based on c-Partite Graphs

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. J. Gross, "Fault-Tolerant Associative Memories Based on c-Partite Graphs," in IEEE Transactions on Signal Processing, Volume 64, Number 4, pp. 829--841, 2015.

Associative memories allow the retrieval of previously stored messages given a part of their content. In this paper, we are interested in associative memories based on c-partite graphs that were recently introduced. These memories are almost optimal in terms of the amount of storage they require (efficiency) and allow retrieving messages with low complexity. We propose a generic implementation model for the retrieval algorithm that can be readily mapped to an integrated circuit and study the retrieval performance when hardware components are affected by faults. We show using analytical and simulation results that these associative memories can be made resilient to circuit faults with a minor modification of the retrieval algorithm. In one example, the memory retains 88% of its efficiency when 1% of the storage cells are faulty, or 98% when 0.1% of the binary outputs of the retrieval algorithm are faulty. When considering storage faults, the fault tolerance exhibited by the proposed associative memory can be comparable to using a capacity-achieving error correction code for protecting the stored information.

Download the manuscript.

Bibtex
@article{LedGriRabGro2015,
  author = {François Leduc-Primeau and Vincent Gripon
and Michael Rabbat and Warren J. Gross},
  title = {Fault-Tolerant Associative Memories Based
on c-Partite Graphs},
  journal = {IEEE Transactions on Signal Processing},
  year = {2015},
  volume = {64},
  number = {4},
  pages = {829--841},
}

Storing sparse messages in networks of neural cliques

B. K. Aliabadi, C. Berrou, V. Gripon and X. Jiang, "Storing sparse messages in networks of neural cliques," in IEEE Transactions on Neural Networks and Learning Systems, Volume 25, pp. 980--989, 2014.

We propose an extension of a recently introduced binary neural network that allows learning a large number of sparse messages with high memory efficiency. This new network is motivated by both biological and informational considerations. The learning and retrieval rules are detailed and illustrated by various simulation results.

Download the manuscript.

Bibtex
@article{AliBerGriJia2014,
  author = {Behrooz Kamary Aliabadi and Claude Berrou
and Vincent Gripon and Xiaoran Jiang},
  title = {Storing sparse messages in networks of
neural cliques},
  journal = {IEEE Transactions on Neural Networks and
Learning Systems},
  year = {2014},
  volume = {25},
  pages = {980--989},
}

Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks," in Journal of Signal Processing Systems, pp. 1--13, 2014.

Associative memories retrieve stored information given partial or erroneous input patterns. A new family of associative memories based on Sparse Clustered Networks (SCNs) has been recently introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose fully-parallel hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules and thus reduce the hardware complexity by consuming 65% fewer FPGA lookup tables and increase the operating frequency by approximately 1.9 times compared to that of previous work. Furthermore, the scaling behaviour of the implemented architectures for various design choices is investigated. We explore the effect of varying design variables such as the number of clusters, network nodes, and erased symbols on the error performance and the hardware resources.

Download the manuscript.

Bibtex
@article{JarOniGriGro2014,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Warren J. Gross},
  title = {Algorithm and Architecture of
Fully-Parallel Associative Memories Based on Sparse
Clustered Networks},
  journal = {Journal of Signal Processing Systems},
  year = {2014},
  pages = {1--13},
}

A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture

H. Jarollahi, N. Onizawa, V. Gripon, N. Sakimura, T. Sugibayashi, T. Endoh, H. Ohno, T. Hanyu and W. J. Gross, "A Non-Volatile Associative Memory-Based Context-Driven Search Engine Using 90 nm CMOS MTJ-Hybrid Logic-in-Memory Architecture," in Journal on Emerging and Selected Topics in Circuits and Systems, Volume 4, pp. 460--474, 2014.

This paper presents the algorithm, architecture, and fabrication results of a nonvolatile context-driven search engine that reduces energy consumption as well as computational delay compared to classical hardware- and software-based approaches. The proposed architecture stores only associations between items from multiple search fields in the form of binary links, and merges repeated field items to reduce the memory requirements and accesses. The fabricated chip achieves memory reduction and 89% energy saving compared to a classical field-based approach in hardware, based on content-addressable memory (CAM). Furthermore, it achieves a reduced number of clock cycles in performing search operations compared to the CAM, and five orders of magnitude fewer clock cycles compared to a fabricated and measured ultra low-power CPU-based counterpart running a classical search algorithm in software. The energy consumption of the proposed architecture is on average three orders of magnitude smaller than that of a software-based approach. A magnetic tunnel junction (MTJ)-based logic-in-memory architecture is presented that allows simple routing and eliminates leakage current in standby using 90 nm CMOS/MTJ-hybrid technologies.

Download the manuscript.

Bibtex
@article{JarOniGriSakSugEndOhnHanGro2014,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Noboru Sakimura and Tadahiko
Sugibayashi and Tetsuo Endoh and Hideo Ohno and
Takahiro Hanyu and Warren J. Gross},
  title = {A Non-Volatile Associative Memory-Based
Context-Driven Search Engine Using 90 nm CMOS
MTJ-Hybrid Logic-in-Memory Architecture},
  journal = {Journal on Emerging and Selected Topics
in Circuits and Systems},
  year = {2014},
  volume = {4},
  pages = {460--474},
}

Sparse neural networks with large learning diversity

V. Gripon and C. Berrou, "Sparse neural networks with large learning diversity," in IEEE Transactions on Neural Networks, Volume 22, Number 7, pp. 1087--1096, July 2011.

Neural networks with three levels of sparsity are introduced. The first is the size of the messages, much smaller than the number of neurons in the networks. The second comes from a singular coding rule that acts as a local constraint on neural activity. The third is the sparseness of the network itself as it appears at the end of learning. Although the proposed model is very simple, relying on binary neurons and connections, it can learn and retrieve a large number of messages, even in the presence of numerous erasures.
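
A small Python sketch of a clique-based binary network in this spirit: messages pick one neuron per cluster, are stored as cliques of binary connections, and erased symbols are recovered by a simple winner-take-all count of incoming connections. Cluster sizes and the decoding rule below are illustrative simplifications, not the exact setting of the paper.

import numpy as np

n_clusters, cluster_size = 8, 16     # network of 8 * 16 binary neurons

def neuron(cluster, symbol):
    return cluster * cluster_size + symbol

def store(messages):
    # Store each message (one symbol per cluster) as a clique of binary connections.
    w = np.zeros((n_clusters * cluster_size,) * 2, dtype=bool)
    for msg in messages:
        active = [neuron(c, s) for c, s in enumerate(msg)]
        for i in active:
            for j in active:
                if i != j:
                    w[i, j] = True
    return w

def retrieve(w, partial, n_rounds=4):
    # Recover erased symbols (None): in each cluster, keep the neuron receiving
    # the most signals from the currently active neurons.
    msg = list(partial)
    for _ in range(n_rounds):
        active = [neuron(c, s) for c, s in enumerate(msg) if s is not None]
        for c in range(n_clusters):
            scores = [w[active, neuron(c, s)].sum() for s in range(cluster_size)]
            msg[c] = int(np.argmax(scores))
    return msg

rng = np.random.default_rng(0)
messages = rng.integers(cluster_size, size=(50, n_clusters))
w = store(messages)
probe = list(messages[0])
probe[2] = probe[5] = None           # erase two symbols
print(retrieve(w, probe), messages[0])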

Download the manuscript.

Bibtex
@article{GriBer20117,
  author = {Vincent Gripon and Claude Berrou},
  title = {Sparse neural networks with large learning
diversity},
  journal = {IEEE Transactions on Neural Networks},
  year = {2011},
  volume = {22},
  number = {7},
  pages = {1087--1096},
  month = {July},
}



