International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 96 - Number 19
Year of Publication: 2014
Authors: Hima Vadapalli
DOI: 10.5120/16904-6971
Hima Vadapalli. Facial Action Unit Recognition from Video Streams with Recurrent Neural Networks. International Journal of Computer Applications. 96, 19 (June 2014), 31-39. DOI=10.5120/16904-6971
Facial expressions are one of the parameters for assessing individual behavioral processes. Their recognition and verification can be framed as the identification of states of dynamical systems generated by physiological processes. Whereas a snapshot of a dynamical system gives information about its current state, a time series of past states captures its trajectory in state space. The description and recognition of facial expressions using atomic muscle movements, so-called action units, provides an extensive framework. The temporal modeling and recognition of these muscle movements promises a broader and more generic approach for recognizing subtle changes in the facial region. This paper proposes the use of recurrent neural networks for modeling facial action unit activity. Unlike other dynamic classifiers such as hidden Markov models, recurrent neural networks are able to model actions based on both their previous and current states. A detailed comparative analysis against the recognition performance of a static classifier such as a support vector machine suggests that recurrent neural networks gain more knowledge about action unit activation when presented with a sequence of images. On average, our model achieved a positive hit rate of 85.8% for upper face action units and 84.9% for lower face action units.
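To illustrate the core idea the abstract describes, the sketch below shows a minimal Elman-style recurrent network whose hidden state carries information from earlier video frames into the prediction for the current frame, in contrast to a static per-frame classifier such as an SVM. This is a hedged illustration only: the class name, layer sizes, and feature dimensions are hypothetical and are not taken from the paper, which may use a different architecture and training procedure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleRNN:
    """Minimal Elman recurrent network (illustrative, not the paper's model).
    The hidden state h summarizes frames 1..t, so the output for frame t
    depends on the whole sequence so far, not just the current frame."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))     # input -> hidden
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # hidden -> hidden (recurrence)
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))   # hidden -> output

    def forward(self, frames):
        """frames: array of shape (T, n_in), one feature vector per video frame.
        Returns an array of shape (T, n_out) of per-frame AU activation scores."""
        h = np.zeros(self.W_rec.shape[0])
        outputs = []
        for x in frames:
            h = np.tanh(self.W_in @ x + self.W_rec @ h)  # state update uses previous state
            outputs.append(sigmoid(self.W_out @ h))      # one score per action unit
        return np.stack(outputs)

# Hypothetical usage: 10 frames of 64-dim facial features, 6 action units.
rnn = SimpleRNN(n_in=64, n_hidden=32, n_out=6)
scores = rnn.forward(np.random.default_rng(1).normal(size=(10, 64)))
print(scores.shape)  # (10, 6)
```

Because the recurrence `W_rec @ h` feeds the previous hidden state back in, the network can exploit the temporal trajectory of the face rather than a single snapshot, which is the advantage over static classifiers that the abstract highlights.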