International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 100 - Number 6
Year of Publication: 2014
Authors: S. Saravanan, S. Palanivel, M. Balasubramanian
DOI: 10.5120/17527-8097
S. Saravanan, S. Palanivel, M. Balasubramanian. Facial Expression and Visual Speech based Person Authentication. International Journal of Computer Applications 100, 6 (August 2014), 8-15. DOI=10.5120/17527-8097
Most person authentication systems lack robustness to face pose and illumination variation. Another open problem is the choice of source for feature generation. In this work, videos were recorded with pose variations under normal office lighting. Each person was recorded in three situations: with a neutral face, with a smiling expression, and while speaking. A second recording session, identical to the first, was conducted after a time gap. The work employs a method to identify the video frames that contain a single frontal face and to extract the required number of such frames from each video. The mouth region is then located automatically, and features are generated from it in a way that mitigates the effects of illumination variation. Features from the first session are used to train a neural network for person authentication, and features from the second session are used to test it. Among several neural network models, the autoassociative neural network (AANN) is chosen for its ability to capture the distribution of the features. Authentication performance is compared across features derived from the neutral face, the smiling expression, and visual speech, using the equal error rate (EER) as the comparison metric. The outcome is that, when intensity-based feature vectors of this kind are used for person authentication, visual speech is the most effective source, the neutral face comes next, and the smiling expression performs the worst.
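The abstract does not specify the AANN architecture or the scoring details, but the overall evaluation pipeline it describes (train a per-person autoassociative model, score test frames by reconstruction quality, compare sources by EER) can be illustrated with a minimal Python sketch. Here, scikit-learn's MLPRegressor is used as a stand-in autoassociative network, and the names `train_feats`, `genuine_scores`, and `impostor_scores` are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_aann(train_feats, bottleneck=16):
    """Fit one autoassociative model per enrolled person on that
    person's session-1 feature vectors (rows of train_feats).
    The layer sizes here are illustrative, not from the paper."""
    model = MLPRegressor(hidden_layer_sizes=(48, bottleneck, 48),
                         activation='tanh', max_iter=2000)
    model.fit(train_feats, train_feats)  # autoassociation: target equals input
    return model

def aann_score(model, feat):
    """Confidence for a claimed identity: a small reconstruction
    error maps to a score near 1 (one common AANN convention)."""
    recon = model.predict(feat.reshape(1, -1))[0]
    return np.exp(-np.linalg.norm(feat - recon))

def equal_error_rate(genuine_scores, impostor_scores):
    """EER: the error rate at the acceptance threshold where the
    false acceptance rate equals the false rejection rate."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # genuine attempts rejected
        far = np.mean(impostor_scores >= t)  # impostor attempts accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```

Under this scheme, the comparison reported in the abstract amounts to computing one EER per feature source (neutral face, smile, visual speech) from session-2 scores and ranking the sources by that value.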