A Robust Lip Tracking System for the Acoustic to Articulatory Inversion

J. Chen, Y. Laprie, and M.-O. Berger (France)

Keywords

lip tracking, acoustic-to-articulatory inversion, recovery

Abstract

The acoustic-to-articulatory inversion of speech, which refers to the mapping from the acoustic signal to the articulatory state, is an interesting problem. Given the acoustic signal, recovering the articulatory state is considered difficult. The reason is the "one-to-many" nature of the acoustic-to-articulatory inversion problem: a given articulatory state always has exactly one acoustic realization, but an acoustic signal can be the outcome of more than one articulatory state. To resolve this "one-to-many" ambiguity, visual information complementary to the acoustic signal is used. This paper therefore develops a robust lip tracking system that provides visual information (such as the width and height of the mouth) for the acoustic-to-articulatory inversion. The proposed approach uses a combination of motion, color, and structure information of the mouth area to track lip feature points. The technique is designed to be effective and robust: it detects the lip feature points automatically and recovers feature points lost during tracking. Encouraging results have been obtained with the proposed approach.
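As a minimal illustration of the kind of visual measurement the abstract mentions (mouth width and height), the following Python/NumPy sketch derives both from a binary lip mask via its bounding box. This is only a hedged sketch under stated assumptions, not the authors' method: how the mask is obtained (e.g. the paper's combination of motion, color, and structure cues) is outside its scope, and the function name and toy ellipse are illustrative.

```python
import numpy as np

def mouth_dimensions(lip_mask: np.ndarray) -> tuple[int, int]:
    """Return (width, height) in pixels of the bounding box of a
    binary lip mask (True where a pixel is classified as lip)."""
    ys, xs = np.nonzero(lip_mask)          # row/column indices of lip pixels
    if xs.size == 0:
        raise ValueError("empty mask: no lip pixels detected")
    width = int(xs.max() - xs.min() + 1)   # horizontal extent of the mouth
    height = int(ys.max() - ys.min() + 1)  # vertical extent of the mouth
    return width, height

# Toy stand-in for a segmented frame: an elliptical "lip" region
# centered at (x=50, y=30) with semi-axes 30 and 10 in a 60x100 image.
yy, xx = np.mgrid[:60, :100]
mask = ((xx - 50) / 30.0) ** 2 + ((yy - 30) / 10.0) ** 2 <= 1.0
w, h = mouth_dimensions(mask)              # w = 61, h = 21
```

In a real pipeline these two scalars would be extracted per frame and passed, alongside the acoustic features, to the inversion stage to disambiguate articulatory candidates.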

IASTED