A Model of Bottom-Up Visual Attention Using Cortical Magnification
The focus of visual attention has been argued to play a key role in object recognition. Many computational models of visual attention have been proposed to estimate the locations of eye fixations driven by bottom-up stimuli. Most of these models rely on pyramids consisting of multiple scaled versions of the visual scene. This design aims to capture the fact that neural cells in higher visual areas tend to have larger receptive fields (RFs). In contrast, very few models represent the multi-scale structure that results from eccentricity-dependent RF sizes within each visual layer, also known as the cortical magnification effect. In this paper, we demonstrate that a cortical-magnification-like mechanism can yield a competitive alternative to pyramidal approaches in the context of attentional modeling. Moreover, we argue that introducing such a mechanism equips the proposed model with additional properties related to overt attention and distance-dependent saliency that are worth exploring.
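To make the eccentricity-dependent RF idea concrete, here is a minimal Python sketch, not the paper's implementation: it smooths an image with a Gaussian whose width grows linearly with distance from a fixation point, approximating cortical magnification with a few discrete eccentricity bands. The function name foveated_blur and the parameters rf_slope and n_rings are illustrative assumptions, not values from the paper.

# A minimal sketch, not the authors' model: receptive-field size
# (Gaussian sigma) grows linearly with eccentricity from a fixation
# point, approximated with a few discrete eccentricity bands.
# `rf_slope` and `n_rings` are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveated_blur(image, fixation, rf_slope=0.05, n_rings=8):
    rows, cols = np.indices(image.shape)
    # Eccentricity: distance of each pixel from the fixation point.
    ecc = np.hypot(rows - fixation[0], cols - fixation[1])
    bands = np.linspace(0.0, ecc.max(), n_rings + 1)
    idx = np.digitize(ecc, bands[1:-1])  # band index 0..n_rings-1 per pixel
    out = np.zeros_like(image, dtype=float)
    for b in range(n_rings):
        # Assumed linear RF growth: sigma is proportional to the band's
        # mean eccentricity, so the fovea stays nearly sharp while the
        # periphery is increasingly blurred.
        sigma = rf_slope * 0.5 * (bands[b] + bands[b + 1])
        out[idx == b] = gaussian_filter(image.astype(float), sigma)[idx == b]
    return out

# Usage: foveate a random test image around its centre.
img = np.random.rand(128, 128)
foveated = foveated_blur(img, fixation=(64, 64))

In contrast to a pyramid, which stores several scaled copies of the whole scene, this kind of mechanism encodes all scales in a single, spatially varying representation tied to the current fixation.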
Download manuscript.
Bibtex:
@inproceedings{AboGriCop20154,
  author    = {Ala Aboudib and Vincent Gripon and Gilles Coppin},
  title     = {A Model of Bottom-Up Visual Attention Using Cortical Magnification},
  booktitle = {Proceedings of ICASSP},
  year      = {2015},
  pages     = {1493--1497},
  month     = {April},
}