Top-down vs. bottom-up guidance of eye movements in real-world scene search
Submitted on 04 Oct 2018

Tamer Ajaj, Milena T. Bagdasarian, Katherine Eddinger and Rong Guo
Neural Information Processing Group, Technische Universität Berlin
This poster was presented at NI Conference 2018 Berlin
Poster Abstract
Visual search is involved in a wide range of everyday activities. When modeled experimentally, cued object search tasks can be used to investigate the search strategies humans apply when looking for an object. Two of the main factors that influence eye movements during object search are bottom-up processes, such as biologically motivated saliency, and top-down processes, such as contextual priors. While recent work focuses on creating unified models, we investigate the degree to which each of these two models influences visual search. In this paper we describe a method that makes individual models comparable, resulting in a scoring function. We apply this scoring function to bottom-up saliency maps and top-down contextual prior maps and analyze how well each predicts fixation points. We show that the first points participants fixate on are influenced by contextual information; once this information fails to lead to the cued object, salient information becomes more dominant.
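The abstract does not specify the scoring function, but one common way to make a saliency map and a contextual prior map comparable is to z-normalize each map and score it by its value at the observed fixations (the Normalized Scanpath Saliency). The sketch below illustrates this idea under that assumption; the map, fixation lists, and function names are hypothetical, not the authors' actual method.

```python
import numpy as np

def normalize_map(m):
    """Scale a prediction map to zero mean and unit standard deviation,
    so maps with different dynamic ranges become comparable."""
    m = np.asarray(m, dtype=float)
    return (m - m.mean()) / (m.std() + 1e-12)

def fixation_score(prediction_map, fixations):
    """Average normalized map value at the fixated pixels (NSS-style score).
    `fixations` is a list of (x, y) pixel coordinates."""
    z = normalize_map(prediction_map)
    return float(np.mean([z[y, x] for x, y in fixations]))

# Toy example: a Gaussian-shaped map peaked at the image center.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
center_map = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 8.0 ** 2))

fix_on_peak = [(32, 32), (30, 34)]   # fixations near the predicted peak
fix_off_peak = [(2, 2), (60, 5)]     # fixations far from the peak
assert fixation_score(center_map, fix_on_peak) > fixation_score(center_map, fix_off_peak)
```

With both maps on this common scale, their scores can be compared fixation by fixation, which is what allows a statement like "early fixations follow the contextual prior map, later ones the saliency map".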


