Multimodal Human-Computer Interaction for Immersive Visualization: Integrating Speech-Gesture Recognition and Augmented Reality for Indoor Environments

A.M. Malkawi and R.S. Srinivasan (USA)


Keywords: Multimodal HCI, speech recognition, gesture recognition, CFD, building simulation.


This paper presents a multimodal HCI-based interactive, immersive Computational Fluid Dynamics (CFD) visualization model for indoor environments. In this model, speech and gesture recognition mechanisms are integrated with an immersive Augmented Reality (AR) system for efficient data exploration. The model comprises four components: (a) a wireless sensor component that monitors environmental boundary conditions as they change and transmits them for CFD analysis; (b) a CFD analysis component that performs the simulation with the new boundary conditions; (c) a multimodal HCI component that aids data manipulation by integrating speech and gesture recognition mechanisms; and (d) an AR visualization component that tracks the user's movement in real time and visualizes CFD datasets using a head-mounted display (HMD) and magnetic trackers. Such a model enables effective manipulation and exploration of indoor thermal CFD data in real time, and can dramatically enhance the way buildings are experienced, managed, and operated.
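The four-component pipeline described above can be sketched as a simple update cycle: sense, simulate, interpret user input, render. The sketch below is purely illustrative; every class and method name (`WirelessSensorNetwork`, `CFDSolver`, `MultimodalHCI`, `ARVisualizer`, and the placeholder solver logic) is an assumption, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class BoundaryConditions:
    inlet_temp_c: float       # supply-air temperature reported by sensors
    inlet_velocity_ms: float  # supply-air velocity reported by sensors

@dataclass
class WirelessSensorNetwork:
    """(a) Monitors environmental boundary conditions as they change."""
    readings: BoundaryConditions

    def poll(self) -> BoundaryConditions:
        return self.readings

class CFDSolver:
    """(b) Re-runs the simulation whenever boundary conditions change."""
    def solve(self, bc: BoundaryConditions) -> dict:
        # Placeholder: a real solver would return full 3D temperature
        # and velocity fields, not a single scalar summary.
        return {"mean_temp_c": bc.inlet_temp_c + 1.5}

class MultimodalHCI:
    """(c) Fuses speech and gesture input into one manipulation command."""
    def interpret(self, speech: str, gesture: str) -> str:
        # e.g. 'show-isosurface' spoken while pointing at a vent
        return f"{speech}:{gesture}"

class ARVisualizer:
    """(d) Tracks the user and renders CFD data on the HMD."""
    def render(self, cfd_field: dict, command: str) -> str:
        return f"render {cfd_field} per {command}"

def update_cycle(sensors: WirelessSensorNetwork, solver: CFDSolver,
                 hci: MultimodalHCI, ar: ARVisualizer,
                 speech: str, gesture: str) -> str:
    bc = sensors.poll()                        # (a) sense
    cfd_field = solver.solve(bc)               # (b) simulate
    command = hci.interpret(speech, gesture)   # (c) interpret user intent
    return ar.render(cfd_field, command)       # (d) visualize
```

One design point the abstract implies: because the sensor component feeds the solver continuously, the cycle must repeat whenever boundary conditions change, which is what makes the visualization "real-time" rather than a one-shot simulation.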
