dc.description.abstract |
It has been shown that little if any attention is required for scene recognition (Li, VanRullen, & Koch, 2002). This claim, however, has been challenged by Cohen, Alvarez, and Nakayama (2011), who showed that basic-level scene categorization and object identification performance degrade when observers simultaneously perform an attention-demanding task. Here, we use the same dual-task paradigm, but in conjunction with a more reliable psychophysical method (Greene & Oliva, 2009a), to measure and compare performance across a broad range of scene recognition tasks, including detection, recognition of spatial structure and scene function, and superordinate- and basic-level categorization. Analysis of the minimum presentation duration at which accuracy reached 75% showed an increase in threshold from the single- to the dual-task condition, indicating a degradation in scene recognition performance. Performance on the concurrent multiple-object tracking task also declined in the dual-task condition, implying that scene recognition and multiple-object tracking may share a common attentional capacity resource. A computational model was used to test whether a feedforward architecture lacking attentional modulation can account for our findings; human scene recognition performance fit the model's predictions only in the dual-task conditions, in which attentional mechanisms are occupied by the tracking task and therefore unavailable to facilitate scene recognition. For scene images categorized as "hard to recognize" by the model, however, behavioral performance in the single-task blocks did not change, providing evidence for a potential attentional facilitation. |
|