Vincent Gripon's Website

Blog about my research and teaching

A Model of Bottom-Up Visual Attention Using Cortical Magnification

A. Aboudib, V. Gripon and G. Coppin, "A Model of Bottom-Up Visual Attention Using Cortical Magnification," in Proceedings of ICASSP, pp. 1493--1497, April 2015.

The focus of visual attention has been argued to play a key role in object recognition. Many computational models of visual attention have been proposed to estimate the locations of eye fixations driven by bottom-up stimuli. Most of these models rely on pyramids consisting of multiple scaled versions of the visual scene. This design aims to capture the fact that neural cells in higher visual areas tend to have larger receptive fields (RFs). On the other hand, very few models represent the multi-scaling that results from eccentricity-dependent RF sizes within each visual layer, also known as the cortical magnification effect. In this paper, we demonstrate that using a cortical-magnification-like mechanism can provide competitive alternatives to pyramidal approaches in the context of attentional modeling. Moreover, we argue that introducing such a mechanism equips the proposed model with additional properties related to overt attention and distance-dependent saliency that are worth exploring.
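To illustrate the contrast drawn in the abstract between uniform image pyramids and eccentricity-dependent RF sizes, here is a minimal Python sketch of the general cortical-magnification idea, not the model from the paper: the receptive-field radius grows linearly with distance from the fixation point, so a single layer covers the scene at several effective resolutions. The function name foveated_blur and the base_radius/slope parameters are illustrative assumptions.

# Minimal sketch (not the authors' code): eccentricity-dependent receptive
# field (RF) sizes as a stand-in for cortical magnification. The RF radius
# grows linearly with distance from a fixation point, so one layer yields
# fine resolution near the fovea and coarse resolution in the periphery.
import numpy as np

def foveated_blur(image, fixation, base_radius=1.0, slope=0.02):
    """Average each pixel over a window whose radius grows with eccentricity.

    base_radius and slope are illustrative parameters, not values from the paper.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fy, fx = fixation
    eccentricity = np.hypot(ys - fy, xs - fx)            # distance to fixation
    radii = np.round(base_radius + slope * eccentricity).astype(int)

    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            patch = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = patch.mean()                      # larger RFs far from the fovea
    return out

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    foveated = foveated_blur(img, fixation=(32, 32))
    print(foveated.shape)

A uniform pyramid would instead compute several downscaled copies of the whole image; the sketch above keeps a single map whose effective resolution varies with eccentricity, which is the distinction the paper builds on.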

Download the manuscript.

Bibtex
@inproceedings{AboGriCop20154,
  author = {Ala Aboudib and Vincent Gripon and Gilles Coppin},
  title = {A Model of Bottom-Up Visual Attention Using Cortical Magnification},
  booktitle = {Proceedings of ICASSP},
  year = {2015},
  pages = {1493--1497},
  month = {April},
}




