Top-down vs. bottom-up guidance of eye movements in real-world scene search
Poster ID: EP29265
Submitted on 04 Oct 2018
Author(s): Tamer Ajaj, Milena T. Bagdasarian, Katherine Eddinger and Rong Guo
Affiliations: Neural Information Processing Group, Technische Universität Berlin
This poster was presented at the NI Conference 2018 in Berlin.


Poster Information
Abstract: Visual search is used in a variety of activities. When modeled experimentally, cued object search tasks can be used to investigate the search strategies humans apply when looking for an object. Two of the main factors that influence eye movements during object search are bottom-up processes, such as biologically motivated saliency, and top-down processes, such as contextual priors. While recent work focuses on creating unified models, we investigate the degree to which the two models influence visual search. In this paper we describe a method that makes individual models comparable, resulting in a scoring function. We apply this scoring function to bottom-up saliency maps and top-down contextual prior maps and analyze how well each predicts fixation points. We show that the first points participants fixate on are influenced by contextual information, and that once this information does not lead to finding the cued object, salient information becomes more dominant.

Summary: The project aimed to understand how we search by saccadic targeting and what role top-down and bottom-up attention modulations play. The research hypothesis in this study was that eye movements are dynamically guided by both saliency and contextual cues in a real-scene saccadic targeting task, but that contextual cues play the more dominant role. To test this hypothesis, we used computer vision algorithms and statistical models to analyze eye-tracking data from a real-world scene search experiment.

References:
1. Mohr, Johannes, Julia Seyfarth, Andreas Lueschow, Joachim E. Weber, Felix A. Wichmann, and Klaus Obermayer. "BOiS—Berlin Object in Scene Database: Controlled Photographic Images for Visual Search Experiments with Quantified Contextual Priors." Frontiers in Psychology 7 (2016): 749.
2. Torralba, Antonio, Aude Oliva, Monica S. Castelhano, and John M. Henderson. "Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search." Psychological Review 113, no. 4 (2006): 766.
3. Itti, Laurent, Christof Koch, and Ernst Niebur. "A model of saliency-based visual attention for rapid scene analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence 20, no. 11 (1998): 1254-1259.
4. Wolfe, Jeremy M., and Todd S. Horowitz. "Five factors that guide attention in visual search." Nature Human Behaviour 1, no. 3 (2017): 0058.
5. Koehler, Kathryn, Fei Guo, Sheng Zhang, and Miguel P. Eckstein. "What do saliency models predict?" Journal of Vision 14, no. 3 (2014): 14.
