Minhua Zheng, Peter X. Liu, and Max Q.-H. Meng


[1] K. Nakamura, Y. Yoshida, E. Sato-Shimokawara, and T. Yamaguchi, Service robot system based on networked robots for using personal attribute and to get preference attribute, International Journal of Robotics and Automation, 24(3), 2009, 206–3261.
[2] C. Breazeal, Social interactions in HRI: The robot view, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 34(2), 2004, 181–186.
[3] A.M. Khamis, M.S. Kamel, and M.A. Salichs, Human–robot interfaces for social interaction, International Journal of Robotics and Automation, 22(3), 2007, 215–221.
[4] S. Modi, Y. Lin, L. Cheng, G. Yang, L. Liu, and W. Zhang, A socially inspired framework for human state inference using expert opinion integration, IEEE/ASME Transactions on Mechatronics, 16(5), 2011, 874–878.
[5] A. Mehrabian, Silent messages (Belmont, CA: Wadsworth, 1971).
[6] C.L. Lisetti, S.M. Brown, K. Alvarez, and A.H. Marpaung, A social informatics approach to human–robot interaction with a service social robot, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 34(2), 2004, 195–209.
[7] C. Breazeal, Function meets style: Insights from emotion theory applied to HRI, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 34(2), 2004, 187–194.
[8] L. Xue, C.J. Liu, Y. Lin, and W.J. Zhang, On redundant human–robot interface: Concept and design principle, Proc. IEEE Int. Conf. on Advanced Intelligent Mechatronics, Busan, 2015, 287–292.
[9] E.A. Sisbot and R. Alami, A human-aware manipulation planner, IEEE Transactions on Robotics, 28(5), 2012, 1045–1057.
[10] V. Srinivasan, C.L. Bethel, and R.R. Murphy, Evaluation of head gaze loosely synchronized with real-time synthetic speech for social robots, IEEE Transactions on Human-Machine Systems, 44(6), 2014, 767–778.
[11] M. Zheng, A. Moon, E.A. Croft, and M.Q.-H. Meng, Impacts of robot head gaze on robot-to-human handovers, International Journal of Social Robotics, 7(5), 2015, 783–798.
[12] T.P. Spexard, M. Hanheide, and G. Sagerer, Human-oriented interaction with an anthropomorphic robot, IEEE Transactions on Robotics, 23(5), 2007, 852–862.
[13] T. Kanda, M. Shiomi, Z. Miyashita, et al., A communication robot in a shopping mall, IEEE Transactions on Robotics, 26(5), 2010, 897–913.
[14] M. Zecca, S. Roccella, M. Carrozza, et al., On the development of the emotion expression humanoid robot WE-4RII with RCH-1, Proc. 4th IEEE/RAS Int. Conf. on Humanoid Robots, Santa Monica, CA, 2004, 235–252.
[15] C. Breazeal, A. Brooks, J. Gray, et al., Tutelage and collaboration for humanoid robots, International Journal of Humanoid Robotics, 1(2), 2004, 315–348.
[16] H. Kim, S.S. Kwak, and M. Kim, Personality design of sociable robots by control of gesture design factors, Proc. 17th IEEE Int. Symp. on Robot and Human Interactive Communication, Munich, 2008, 494–499.
[17] R. Meena, K. Jokinen, and G. Wilcock, Integration of gestures and speech in human–robot interaction, Proc. IEEE 3rd Int. Conf. on Cognitive Infocommunications, Kosice, 2012, 673–678.
[18] V. Ng-Thow-Hing, P. Luo, and S. Okita, Synchronized gesture and speech production for humanoid robots, Proc. 2010 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Taipei, 2010, 4617–4624.
[19] M. Salem, S. Kopp, I. Wachsmuth, et al., Generation and evaluation of communicative robot gesture, International Journal of Social Robotics, 4(2), 2012, 201–217.
[20] M. Salem, S. Kopp, and F. Joublin, Closing the loop: Towards tightly synchronized robot gesture and speech, Proc. 5th Int. Conf. on Social Robotics, Bristol, 2013, 381–391.
[21] A. Moon, D. Troniak, B. Gleeson, et al., Meet me where I'm gazing: How shared attention gaze affects human–robot handover timing, Proc. 9th ACM/IEEE Int. Conf. on Human–Robot Interaction, Bielefeld, 2014, 334–341.
[22] H. Admoni, A. Dragan, S. Srinivasa, and B. Scassellati, Deliberate delays during robot-to-human handovers improve compliance with gaze communication, Proc. 9th ACM/IEEE Int. Conf. on Human–Robot Interaction, Bielefeld, 2014, 49–56.
[23] N. Kirchner, A. Alempijevic, and G. Dissanayake, Nonverbal robot-group interaction using an imitated gaze cue, Proc. 6th ACM/IEEE Int. Conf. on Human–Robot Interaction, Lausanne, 2011, 497–504.
[24] A. Beck, L. Cañamero, A. Hiolle, et al., Interpretation of emotional body language displayed by a humanoid robot: A case study with children, International Journal of Social Robotics, 5(3), 2013, 325–334.
[25] S.-J. Lee, C.-Y. Jung, B.-S. Yoo, et al., Arm gesture generation of humanoid robot Mybot-KSR for human–robot interaction, Proc. 16th FIRA RoboWorld Congress, Kuala Lumpur, 2013, 36–48.
[26] B. Gleeson, K. MacLean, A. Haddadi, et al., Gestures for industry: Intuitive human–robot communication from human observation, Proc. 8th ACM/IEEE Int. Conf. on Human–Robot Interaction, Tokyo, 2013, 349–356.
[27] T. Ende, S. Haddadin, S. Parusel, et al., A human-centered approach to robot gesture based communication within collaborative working processes, Proc. 2011 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, San Francisco, CA, 2011, 3367–3374.
[28] A. Kendon, Gesture: Visible action as utterance (Cambridge, England: Cambridge University Press, 2004).
[29] D. McNeill, Gesture and thought (Chicago, IL: University of Chicago Press, 2008).
[30] P. Ekman and W.V. Friesen, Hand movements, Journal of Communication, 22(4), 1972, 353–374.
[31] P. Ekman and W.V. Friesen, The repertoire of nonverbal behaviour: Categories, origins, usage, and coding, Semiotica, 1(1), 1969, 49–98.
[32] H.G. Johnson, P. Ekman, and W.V. Friesen, Communicative body movements: American emblems, Semiotica, 15(4), 1975, 335–353.
[33] M.S. Remland, Nonverbal communication in everyday life, 3rd ed. (London, England: Pearson, 2008).
[34] D. Matsumoto and H.C. Hwang, Cultural similarities and differences in emblematic gestures, Journal of Nonverbal Behavior, 37(1), 2013, 1–27.
[35] A. Kranstedt, S. Kopp, and I. Wachsmuth, MURML: A multimodal utterance representation markup language for conversational agents, AAMAS'02 Workshop on Embodied Conversational Agents – Let's Specify and Evaluate Them!, Bologna, Italy, 2002, 1–8.
[36] J. Li and M. Chignell, Communication of emotion in social robots through simple head and arm movements, International Journal of Social Robotics, 3(2), 2011, 125–142.
[37] M. Lohse, R. Rothuis, J. Gallego-Pérez, et al., Robot gestures make difficult tasks easier: The impact of gestures on perceived workload and task performance, Proc. 32nd Annual ACM Conf. on Human Factors in Computing Systems, Toronto, 2014, 1459–1466.
[38] G. Hoffman and C. Breazeal, Cost-based anticipatory action selection for human–robot fluency, IEEE Transactions on Robotics, 23(5), 2007, 952–961.
[39] C.J. Hayes, C.R. Crowell, and L.D. Riek, Automatic processing of irrelevant co-speech gestures with human but not robot actors, Proc. 8th ACM/IEEE Int. Conf. on Human–Robot Interaction, Tokyo, 2013, 333–340.
[40] C. Pelachaud, Modelling multimodal expression of emotion in a virtual agent, Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535), 2009, 3539–3548.
[41] C. Pelachaud, Studies on gesture expressivity for a virtual agent, Speech Communication, 51(7), 2009, 630–639.
[42] S. Kopp and I. Wachsmuth, Synthesizing multimodal utterances for conversational agents, Computer Animation and Virtual Worlds, 15(1), 2004, 39–52.
[43] A. Beck, L. Cañamero, and K.A. Bard, Towards an affect space for robots to display emotional body language, Proc. 19th IEEE Int. Symp. on Robot and Human Interactive Communication, Viareggio, 2010, 464–469.
[44] K. Nickel, E. Seemann, and R. Stiefelhagen, 3D-tracking of head and hands for pointing gesture recognition in a human–robot interaction scenario, Proc. 6th IEEE Int. Conf. on Automatic Face and Gesture Recognition, Seoul, 2004, 565–570.
[45] D.B. Givens, The nonverbal dictionary of gestures, signs & body language cues, accessed October 3rd, 2016.
[46] A. Field, Discovering statistics using SPSS, 3rd ed. (SAGE Publications, 2009).
[47] B. Tondu and N. Bardou, A new interpretation of Mori's uncanny valley for future humanoid robots, International Journal of Robotics and Automation, 26(3), 2011, 337–348.
[48] T. Nomura, T. Kanda, T. Suzuki, and K. Kato, Prediction of human behaviour in human–robot interaction using psychological scales for anxiety and negative attitudes toward robots, IEEE Transactions on Robotics, 24(2), 2008, 442–451.
[49] B. Mutlu, J. Forlizzi, and J. Hodgins, A storytelling robot: Modeling and evaluation of human-like gaze behaviour, Proc. 6th IEEE-RAS Int. Conf. on Humanoid Robots, Genoa, 2006, 518–523.
