Multimodal Interaction for Mobile Robot Guidance

G. Iannizzotto, P. Lanzafame, F. La Rosa, and C. Costanzo (Italy)


Keywords: Multimodal user interaction, autonomous robots, object recognition


Human-robot interaction is a very important issue for autonomous robots, in particular when the targeted environment is more general than the strictly constrained, fully controlled setting of factory automation. Modern robots are basically computers equipped with complex actuators and sensing devices: thus their native communication paradigm can be described in terms of tokens and numeric data, and performed through a keyboard and a monitor or similar devices. We introduce a multimodal human-computer interface and robot guidance platform that allows a human user to communicate with a robot, associate visual tags and spoken tokens with objects, and ask the robot to perform actions. To illustrate our work, we implemented the presented system on a mobile robot platform, making it autonomous and able to perform navigation, object recognition, and manipulation tasks. Experimental results are shown and discussed.
