Vincent Gripon's Homepage

Research and Teaching Blog

Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks," in Journal of Signal Processing Systems, pp. 1--13, 2014.

Associative memories retrieve stored information given partial or erroneous input patterns. A new family of associative memories based on Sparse Clustered Networks (SCNs) has recently been introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose fully-parallel hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules, reducing hardware complexity by consuming 65% fewer FPGA lookup tables and increasing the operating frequency approximately 1.9-fold compared with previous work. Furthermore, the scaling behaviour of the implemented architectures under various design choices is investigated. We explore the effect of design variables such as the number of clusters, network nodes, and erased symbols on the error performance and the hardware resources.
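To make the retrieval principle concrete, here is a minimal software sketch of an SCN associative memory: messages pick one neuron per cluster and are stored as cliques of binary connections, and erased symbols are recovered by iterative scoring. Note that this sketch uses a per-cluster winner-take-all step for simplicity, which is precisely the module the paper's hardware architectures eliminate; all class and parameter names here are illustrative, not taken from the paper.

```python
import itertools

class SCNAssociativeMemory:
    """Illustrative Sparse Clustered Network associative memory:
    c clusters of l neurons each; a message selects one neuron per
    cluster and is stored as a clique of binary connections."""

    def __init__(self, clusters, neurons_per_cluster):
        self.c = clusters
        self.l = neurons_per_cluster
        self.edges = set()  # binary connections between (cluster, neuron) nodes

    def store(self, message):
        # message: tuple of c symbols, each in range(l)
        nodes = [(i, s) for i, s in enumerate(message)]
        for a, b in itertools.combinations(nodes, 2):
            self.edges.add((a, b))
            self.edges.add((b, a))

    def retrieve(self, partial, iterations=4):
        # partial: list of c symbols, with None marking erased positions
        active = {i: ({s} if s is not None else set(range(self.l)))
                  for i, s in enumerate(partial)}
        for _ in range(iterations):
            new_active = {}
            for i in range(self.c):
                # score each candidate by how many *other* clusters
                # contain an active neuron connected to it
                scores = {}
                for s in active[i]:
                    scores[s] = sum(
                        1 for j in range(self.c) if j != i and
                        any(((j, t), (i, s)) in self.edges for t in active[j])
                    )
                best = max(scores.values())
                # winner-take-all within the cluster (kept here for
                # simplicity; the paper's architectures remove this module)
                new_active[i] = {s for s, v in scores.items() if v == best}
            active = new_active
        return [next(iter(active[i])) if len(active[i]) == 1 else None
                for i in range(self.c)]
```

For example, after storing two 4-symbol messages, a query with one erased symbol converges back to the stored message in a single iteration, because the correct neuron is the only one connected to active neurons in all other clusters.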

Download manuscript.

Bibtex
@article{JarOniGriGro2014,
  author = {Hooman Jarollahi and Naoya Onizawa and Vincent Gripon and Warren J. Gross},
  title = {Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks},
  journal = {Journal of Signal Processing Systems},
  year = {2014},
  pages = {1--13},
}



