Task Modulated Active Vision For Advanced Human-Robot Interaction

Type: Article
Original language: English
Number of pages: 23
Journal: International Journal of Humanoid Robotics
Volume: 9
Issue number: 3
DOI
Publication status: Published - Sep 2012

Abstract

Eye fixation and gaze fixation patterns in general play an important role when humans interact with each other. Moreover, human gaze fixation patterns are strongly determined by the task being performed. Our assumption is that meaningful human-robot interaction with robots having active vision components (such as humanoids) is greatly supported if the robot system is able to create task-modulated fixation patterns. We present an architecture for a robot active vision system equipped with one manipulator in which we demonstrate the generation of task-modulated gaze control, meaning that fixation patterns are in accordance with a specific task the robot has to perform. Experiments demonstrate different strategies of multi-modal task modulation for robotic active vision, where visual and non-visual features (tactile feedback) determine gaze fixation patterns. The results are discussed in comparison to purely saliency-based strategies for visual attention and gaze control. The major advantage of our approach to multi-modal task modulation is that the active vision system can generate, first, active avoidance of objects and, second, active engagement with objects. Such behaviors cannot be generated by current approaches to visual attention that are based solely on saliency models, but they are important for mimicking human-like gaze fixation patterns.
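The core idea of the abstract can be sketched as follows: a bottom-up saliency map is combined with a task-relevance map whose weight determines engagement (positive) or avoidance (negative), with inhibition of return suppressing recently fixated locations. This is a minimal illustrative sketch, not the paper's actual architecture; the function name, the additive combination rule, and the map representations are assumptions made for illustration.

```python
import numpy as np

def task_modulated_fixation(saliency, task_map, task_weight, ior_mask=None):
    """Select the next fixation from a task-modulated attention map.

    saliency   : 2D array of bottom-up saliency values (hypothetical input)
    task_map   : 2D array marking task-relevant regions (hypothetical input)
    task_weight: > 0 biases gaze toward task-relevant regions (active
                 engagement); < 0 suppresses them (active avoidance)
    ior_mask   : optional 2D array in [0, 1] implementing inhibition of
                 return by attenuating recently fixated locations
    """
    # Additive combination is one simple choice; the paper may use another.
    attention = saliency + task_weight * task_map
    if ior_mask is not None:
        attention = attention * (1.0 - ior_mask)
    # The next fixation target is the peak of the modulated map.
    return np.unravel_index(np.argmax(attention), attention.shape)

# Example: salient distractor at (0, 0), task-relevant object at (2, 2).
sal = np.zeros((3, 3)); sal[0, 0] = 1.0
task = np.zeros((3, 3)); task[2, 2] = 1.0
engage = task_modulated_fixation(sal, task, task_weight=2.0)   # → (2, 2)
avoid = task_modulated_fixation(sal, task, task_weight=-2.0)   # → (0, 0)
```

With a positive task weight the system fixates the task-relevant object even though the distractor is more salient; a negative weight yields the active-avoidance behavior that a pure saliency model cannot produce.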

Keywords

  • active vision, multi-modal visual attention, gaze fixation patterns, inhibition of return, human-robot interaction