Research Article

Telugu based Emotion Recognition System using Hybrid Features

by J. Naga Padmaja, R. Rajeswara Rao
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 182 - Number 37
Year of Publication: 2019
Authors: J. Naga Padmaja, R. Rajeswara Rao
10.5120/ijca2019918359

J. Naga Padmaja and R. Rajeswara Rao. Telugu based Emotion Recognition System using Hybrid Features. International Journal of Computer Applications 182, 37 (Jan 2019), 9-16. DOI=10.5120/ijca2019918359

@article{ 10.5120/ijca2019918359,
author = { J. Naga Padmaja and R. Rajeswara Rao },
title = { Telugu based Emotion Recognition System using Hybrid Features },
journal = { International Journal of Computer Applications },
issue_date = { Jan 2019 },
volume = { 182 },
number = { 37 },
month = { Jan },
year = { 2019 },
issn = { 0975-8887 },
pages = { 9-16 },
numpages = {9},
url = { https://ijcaonline.org/archives/volume182/number37/30304-2019918359/ },
doi = { 10.5120/ijca2019918359 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A J. Naga Padmaja
%A R. Rajeswara Rao
%T Telugu based Emotion Recognition System using Hybrid Features
%J International Journal of Computer Applications
%@ 0975-8887
%V 182
%N 37
%P 9-16
%D 2019
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Emotion recognition from speech finds use in a growing range of research applications and is becoming a tool for analyzing the health condition of a speaker. In this work, the emotions anger, fear, happy, and neutral are considered for the design of a speech emotion recognition algorithm, and a database built by IITKGP is used for emotion recognition. For any recognition task, feature extraction and pattern classification are the key steps. The features considered in this work are Mel Frequency Cepstral Coefficients (MFCC), pitch chroma, and prosodic features. Hidden Markov Models (HMMs) are used to model and identify the emotions. The database is evaluated under different train/test combinations: male training with female testing, male training with male testing, female training with female testing, and female training with male testing. All these combinations are trained and tested with i-vectors with GMMs, linear Hidden Markov Models (HMMs), and Ergodic Hidden Markov Models (EHMMs). In almost all cases, the EHMM method shows a significant improvement in recognition accuracy over i-vectors with GMMs and linear HMMs.
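The classification step described in the abstract can be illustrated with a small sketch. The following Python example is purely hypothetical (toy 3-state models over quantized symbols, made-up probabilities; the paper's models operate on continuous MFCC, pitch-chroma, and prosodic features). It scores an observation sequence under a linear (left-to-right) HMM and an ergodic HMM, whose transition matrices make the topological difference compared in the paper concrete:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood log P(obs | model) for a discrete-emission HMM,
    computed with the scaled forward algorithm.
    pi: initial state probabilities, A: transitions, B: emissions."""
    alpha = pi * B[:, obs[0]]          # forward variables at t = 0
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()        # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        s = alpha.sum()
        log_lik += np.log(s)
        alpha = alpha / s
    return log_lik

# Hypothetical 3-state models over 3 quantized feature symbols.
# Linear (left-to-right) HMM: a state may only persist or advance.
A_linear = np.array([[0.6, 0.4, 0.0],
                     [0.0, 0.6, 0.4],
                     [0.0, 0.0, 1.0]])
pi_linear = np.array([1.0, 0.0, 0.0])

# Ergodic HMM: every state can transition to every other state.
A_ergodic = np.full((3, 3), 1.0 / 3.0)
pi_ergodic = np.full(3, 1.0 / 3.0)

# Shared (made-up) emission matrix B[state, symbol].
B = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

obs = [0, 0, 1, 2, 2, 1]  # a toy quantized feature sequence
for name, p, A in [("linear", pi_linear, A_linear),
                   ("ergodic", pi_ergodic, A_ergodic)]:
    print(name, forward_log_likelihood(obs, p, A, B))
```

In a recognizer of this kind, one model would be trained per emotion (anger, fear, happy, neutral), and a test utterance is assigned to whichever emotion model yields the highest likelihood.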

References
  1. Athanaselis et al., 2005, “ASR for Emotional Speech: clarifying the issues and enhancing performance”, Neural Networks, 18:437-444.
  2. Iker Luengo et al., 2008, “Text independent speaker identification in multilingual environments”, lrec2008, pp:1814-1817.
  3. Ch. Srinivasa Kumar et al., “Design of an Automatic Speaker Recognition System Using MFCC, Vector Quantization and LBG Algorithm”, IJCSE, Vol. 3, 2011, pp. 2942-2954.
  4. Prerna Puri, “Detailed analysis of Speaker Recognition System and use of MFCCs for recognition”,Vol.3,2013, IOSR journal of Engineering, pp:32-36.
  5. Mannepalli, K., Sastry, P.N. & Suman, M. Int J Speech Technol (2016) 19: 779. https://doi.org/10.1007/s10772-016-9368-y
  6. Laura Caponetti, Cosimo Alessandro Buscicchio and Giovanna Castellano, “Biologically inspired emotion recognition from Speech”, Journal on Advances in Signal Processing, 2011.
  7. Yazid Attabi and Pierre Dumouchel, “Anchor Models for Emotion Recognition from Speech” IEEE Transaction on Affective Computing, Vol. 4, No. 3, pp: 280-290, 2013.
  8. L. Zão, “Time-Frequency Feature and AMS-GMM Mask for Acoustic Emotion Classification”, IEEE Signal Processing Letters, Vol. 21, No. 5, pp: 620-624,May, 2014.
  9. S. Selva Nidhyananthan, R. Shantha Selva Kumari, L. Bala Manikandan & P. Suresh, “Realization of emotions in speech using prosodic and articulation features”, International Journal of Advanced Electrical and Electronics Engineering, Vol. 2, Issue 2, pp. 83-86, 2013.
  10. Mayank Bhargava and Tim Polzehl, “Improving Automatic Emotion Recognition from Speech using Rhythm and Temporal Features”, ICECIT, published by Elsevier, Vol. 2, Issue 2, pp. 139-147, 2012.
  11. Yasmine Maximos and David Suendermann- Oeft,“Emotion recognition from children’s speech” Speaker Emotion Challenge at Interspeech2013.
  12. Ankur Sapra, Nikhil Panwar and Sohan Panwar, “Emotion Recognition from Speech”, International Journal of Emerging Technology and Advanced Engineering, Volume 3, Issue 2, pp. 341-345, February 2013.
  13. Vaishali M. Chavan and V.V. Gohokar “Speech Emotion Recognition by using SVM classifier” International Journal of Engineering and Advanced Technology (IJEAT), pp: 2249 – 8958,Volume-1, Issue-5, June 2012.
  14. Dellaert et al., “ Recognizing Emotion in Speech”, ICSLP, 1996.
  15. Lee CM, Narayanan, “ Toward detecting emotion in spoken Dialogs”, IEEE Trans Speech Audio Process, 13 (2): 293-303.
  16. Murray, Arnott, “Implementation and testing of a system for producing emotion-by-rule in synthetic speech”, Speech Communication, 16, 369-390.
  17. McGilloway et al. 2000, Rao et al. 2010,“ Approaching automatic recognition of emotion from voice”, ISCA workshop on speech and emotion.
  18. Gish, H., Krasner, M., Russell, W., and Wolf, J., “Methods and experiments for text-independent speaker recognition over telephone channels,” Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 11, pp. 865-868, Apr. 1986.
  19. Reynolds, D. A., and Rose, R. C., “Robust Text-Independent Speaker Identification using Gaussian Mixture Models’’ IEEE-Transactions on Speech and Audio Processing, vol. 3, no. 1, pp. 72-83, Jan-1995.
  20. “Emotion Recognition using Prosody Features”, Anne KR Kuchibotla, 2015.
  21. Diana Torres-Boza, Meshia Cedric Oveneke, et al., “Hierarchical Sparse Coding Framework for Speech Emotion Recognition”, Speech Communication (2018), doi:10.1016/j.specom.2018.01.006.
  22. Jainath Yadav, Md. Shah Fahad et al., “Epoch detection from emotional speech signal using zero time windowing”, Speech Communication 96 (2018), 142-149.
  23. Zhen-Tao Liu, et al., “Speech emotion recognition based on feature selection and extreme learning machine decision tree”, Neurocomputing (2017), doi:10.1016/j.neucom.2017.07.050.
  24. Shaoling Jing, Xia Mao and Lijiang Chen, “Prominence features: Effective emotional features for speech emotion recognition”, Digital Signal Processing, 2017, doi.org/10.1016/j.dsp.2017.10.016.
  25. M. Srikanth, D. Pravena, and D. Govind. “Tamil Speech Emotion Recognition Using Deep Belief Network (DBN)” Advances in Signal Processing and Intelligent Recognition Systems, Advances in Intelligent Systems and Computing 678, 2018. DOI 10.1007/978-3-319-67934-1 29
  26. W. Dai, D. Han, Y. Dai, and D. Xu, “Emotion Recognition and Affective Computing on Vocal Social Media,”Inf. Manag., Feb. 2015.
  27. H. Cao, R. Verma, and A. Nenkova, “Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech,” Comput. Speech Lang., vol. 28, no. 1,pp. 186–202, Jan. 2015.
  28. Peeters, G., “Chroma-based Estimation of Musical Key from Audio-Signal Analysis”, In Proceedings of the 7th International Conference on Music Information Retrieval", Victoria (BC), Canada, 2006.
  29. Chin Kim On Paulraj M. Pandiyan Sazali Yaacob Azali Saudi, "Mel-Frequency Cepstral Coefficient Analysis in Speech Recognition", in proceedings of International Conference on Computing & Informatics, pp. 1 - 5, 2006.
  30. Mannepalli, K., Sastry, P.N. & Suman, M. Int J Speech Technol (2016) 19: 87. https://doi.org/10.1007/s10772-015-9328-y.
Index Terms

Computer Science
Information Sciences

Keywords

Emotion Specific I-Vector, Gaussian Mixture Models, Prosody Features, Spectral Features, HMM, EHMM.