Abstract:
Biological vision systems explore their environment by allocating their resources to interesting parts of a scene, using both physical and mental attention mechanisms. The result of this active and attentive vision behavior is a sequence of images obtained from different spatial locations at different times. Temporal processing and integration mechanisms in the brain nevertheless enable us to interpret this information and perceive a stable image of the environment. While models of such attention and perception mechanisms are invaluable for understanding human vision, they are also increasingly used and improved by robotics and artificial intelligence researchers seeking human-like performance. In a similar spirit, we propose a new and complete model of active vision behavior, grounded in confirmed biological evidence where available. The model consists of an attention system, temporal image-sequence processing algorithms, and an integrative visual memory. All components of the model are implemented on our mobile robot APES. Gaze control, sequence-based scene recognition, and visual integration tasks are performed in the experiments. The results of the gaze control experiments clearly demonstrate human-like selective attention behavior that can be fully controlled by a small number of parameters. In the recognition and integration tasks, both simple and complex scenes were successfully modeled and classified. Furthermore, our work on attentional image sequences raised a number of interesting questions, some of which are answered in this thesis.