International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 96 - Number 2
Year of Publication: 2014
Authors: Elham S. Salama, Reda A. El-khoribi, Mahmoud E. Shoman
DOI: 10.5120/16770-6337
Elham S. Salama, Reda A. El-khoribi, Mahmoud E. Shoman. Audio-Visual Speech Recognition for People with Speech Disorders. International Journal of Computer Applications. 96, 2 (June 2014), 51-56. DOI=10.5120/16770-6337
Speech recognition for people with speech disorders is a difficult task because of their reduced motor control over the speech articulators. Multimodal speech recognition can be used to make recognition of disordered speech more robust. This paper introduces an automatic speech recognition system for people with the speech disorder dysarthria, based on both acoustic and visual components. Mel-Frequency Cepstral Coefficients (MFCCs) are used as features representing the acoustic speech signal. For the visual counterpart, Discrete Cosine Transform (DCT) coefficients are extracted from the speaker's mouth region; face and mouth regions are detected using the Viola-Jones algorithm. The acoustic and visual features are then concatenated into a single feature vector, and a Hidden Markov Model (HMM) classifier is applied to this combined vector. The system is tested on isolated English words spoken by dysarthric speakers from the UA-Speech database. The results indicate that visual features are highly effective, improving accuracy by up to 7.91% in speaker-dependent experiments and 3% in speaker-independent experiments.
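The abstract describes a feature-level (early) fusion pipeline: MFCCs from the audio, a 2-D DCT of a Viola-Jones-detected mouth region from the video, concatenation of the two streams, and per-word HMMs. The sketch below shows one plausible assembly of that pipeline in Python using common open-source tools (librosa, OpenCV, SciPy, hmmlearn). All concrete choices here are illustrative assumptions, not the paper's reported configuration: the 13 MFCCs, the 8x8 DCT block, the 5 HMM states, the use of OpenCV's smile cascade as a stand-in mouth detector, and the truncation-based alignment of the two frame streams.

    # A minimal sketch of the audio-visual pipeline described in the abstract.
    # Library and parameter choices are illustrative assumptions, not the
    # paper's exact setup.
    import numpy as np
    import cv2
    import librosa
    from scipy.fftpack import dct
    from hmmlearn import hmm

    def acoustic_features(wav_path, n_mfcc=13):
        """MFCCs per audio frame; n_mfcc=13 is an assumed dimension."""
        y, sr = librosa.load(wav_path, sr=None)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

    # Viola-Jones detectors shipped with OpenCV (Haar cascades); the smile
    # cascade is used here as a stand-in mouth detector.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    mouth_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")

    def visual_features(frame_bgr, n_coef=8):
        """2-D DCT of the mouth region; keep an n_coef x n_coef low-frequency
        block (assumed size) as the visual feature vector."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        lower = gray[y + h // 2 : y + h, x : x + w]  # mouth lies in the lower face
        mouths = mouth_cascade.detectMultiScale(lower, 1.1, 10)
        if len(mouths) > 0:
            mx, my, mw, mh = mouths[0]
            lower = lower[my:my + mh, mx:mx + mw]
        roi = cv2.resize(lower, (32, 32)).astype(np.float64)
        coeffs = dct(dct(roi.T, norm="ortho").T, norm="ortho")  # separable 2-D DCT
        return coeffs[:n_coef, :n_coef].flatten()

    def fuse(audio_feats, video_feats):
        """Feature-level fusion: align the two streams to a common frame count
        (naively, by truncation) and concatenate per frame. Assumes a mouth
        region was found in every video frame."""
        n = min(len(audio_feats), len(video_feats))
        return np.hstack([audio_feats[:n], np.asarray(video_feats[:n])])

    def train_word_model(sequences, n_states=5):
        """One GaussianHMM per vocabulary word, trained on that word's
        fused feature sequences."""
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        model.fit(X, lengths)
        return model

    def classify(models, feats):
        """Pick the word whose HMM gives the fused sequence the highest
        log-likelihood; models maps word -> trained GaussianHMM."""
        return max(models, key=lambda word: models[word].score(feats))

For isolated-word recognition of this kind, one HMM is trained per vocabulary word and a test utterance is assigned to the word whose model scores it highest; the concatenation step in fuse corresponds to the abstract's combination of acoustic and visual features into one vector before classification.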