Research Article

Emotion Recognition in Music Signal using AANN and SVM

by N. J. Nalini, S. Palanivel
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 77 - Number 2
Year of Publication: 2013
DOI: 10.5120/13364-0958

N. J. Nalini, S. Palanivel. Emotion Recognition in Music Signal using AANN and SVM. International Journal of Computer Applications 77, 2 (September 2013), 7-14. DOI=10.5120/13364-0958

@article{ 10.5120/13364-0958,
author = { N. J. Nalini and S. Palanivel },
title = { Emotion Recognition in Music Signal using AANN and SVM },
journal = { International Journal of Computer Applications },
issue_date = { September 2013 },
volume = { 77 },
number = { 2 },
month = { September },
year = { 2013 },
issn = { 0975-8887 },
pages = { 7-14 },
numpages = { 8 },
url = { https://ijcaonline.org/archives/volume77/number2/13364-0958/ },
doi = { 10.5120/13364-0958 },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}
%0 Journal Article
%A N. J. Nalini
%A S. Palanivel
%T Emotion Recognition in Music Signal using AANN and SVM
%J International Journal of Computer Applications
%@ 0975-8887
%V 77
%N 2
%P 7-14
%D 2013
%I Foundation of Computer Science (FCS), NY, USA
Abstract

The main objective of this work is to develop a music emotion recognition (MER) technique using Mel-frequency cepstral coefficients (MFCC), autoassociative neural networks (AANN) and support vector machines (SVM). The emotions considered are anger, happy, sad, fear, and neutral. The music database is collected at 44.1 kHz with 16 bits per sample from various movies and music-related websites. For each emotion, 15 music signals are recorded, each of 15 s duration. The proposed MER technique proceeds in two phases: i) feature extraction and ii) classification. First, the music signal is passed to the feature extraction phase to extract MFCC features. Second, the extracted features are given to AANN and SVM classifiers to categorize the emotions, and their performance is compared. The experimental results show that MFCC with the AANN classifier achieves a recognition rate of about 94.4%, against about 85.0% with the SVM classifier; the AANN classifier thus outperforms the SVM classifier.
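The classification phase described above can be sketched in miniature. An AANN scores a feature vector by how well it can be compressed through a bottleneck layer and reconstructed: one network is trained per emotion, and a test vector is assigned to the emotion whose network reconstructs it with the smallest error. A purely *linear* autoassociator is equivalent to PCA (cf. reference 20), so the sketch below uses a per-class PCA subspace as a stand-in for the AANN stage. This is a simplified illustration, not the paper's implementation: the 13-dimensional "MFCC" vectors are synthetic placeholders drawn around random per-emotion means, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["anger", "happy", "sad", "fear", "neutral"]
DIM, BOTTLENECK = 13, 5  # 13 MFCC-like coefficients, 5-unit bottleneck

# Synthetic stand-in for per-frame MFCC vectors: each emotion's
# features cluster around a different random mean (an assumption
# made only so the sketch is self-contained and runnable).
means = {e: rng.normal(scale=4.0, size=DIM) for e in EMOTIONS}

def sample(emotion, n):
    """Draw n synthetic feature vectors for one emotion."""
    return means[emotion] + rng.normal(scale=1.0, size=(n, DIM))

# "Training": for each emotion, store the feature mean and the top
# BOTTLENECK principal directions -- the linear-AANN subspace.
models = {}
for e in EMOTIONS:
    X = sample(e, 200)
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    models[e] = (mu, vt[:BOTTLENECK])  # (mean, bottleneck basis)

def classify(x):
    """Assign x to the emotion with the smallest reconstruction error."""
    def recon_error(model):
        mu, basis = model
        r = x - mu
        # Distance from x to the class subspace (reconstruction residual).
        return np.linalg.norm(r - basis.T @ (basis @ r))
    return min(models, key=lambda e: recon_error(models[e]))

correct = sum(
    classify(sample(e, 1)[0]) == e for e in EMOTIONS for _ in range(50)
)
print(f"accuracy on synthetic data: {correct / 250:.2f}")
```

The SVM phase of the paper would replace `classify` with a trained multi-class margin classifier over the same feature vectors; the minimum-reconstruction-error rule shown here is what distinguishes the AANN approach.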

References
  1. Yi-Hsuan Yang and Homer H. Chen. 2010. Music Emotion Recognition. CRC Press.
  2. Yi-Hsuan Yang, Yu-Ching Lin and Homer H. Chen. 2007. A Regression Approach to Music Emotion Recognition. IEEE.
  3. Thayer, R. E. 1989. The Biopsychology of Mood and Arousal. New York: Oxford University Press.
  4. Lin, Y.-C., Yang, Y.-H. and Chen, H.-H. 2009. Exploiting genre for music emotion classification. Proc. IEEE Int. Conf. Multimedia Expo, 618-621.
  5. Yang, Y.-H. and Chen, H. H. 2009. Music emotion ranking. In Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 1657-1660.
  6. Eerola, T., Lartillot, O. and Toiviainen, P. 2009. Prediction of multidimensional emotional ratings in music from audio using multivariate regression models. In Proc. Int. Conf. Music Inf. Retrieval, 621-626.
  7. Law, E., West, K., Mandel, M., Bay, M. and Downie, J. S. 2009. Evaluation of algorithms using games: The case of music annotation. In Proc. Int. Conf. Music Inf. Retrieval, 387-392.
  8. Pachet, F. and Roy, P. 2009. Improving multilabel analysis of music titles: A large-scale validation of the correction approach. IEEE Trans. Audio, Speech, Lang. Processing, 17(2), 335-343.
  9. Bokyung Sung, Myung-Bum Jung and Ilju Ko. 2008. A Feature Based Music Content Recognition Method using Simplified MFCC. Int. Journal of Principles and Applications of Information Science and Technology, 2(1).
  10. Yongjin Wang and Ling Guan. 2008. Recognizing Human Emotional State from Audiovisual Signals. IEEE Transactions on Multimedia, 10(5), 936-946.
  11. Chuan-Yu Chang, Chi-Keng Wu, Chun-Yen Lo, Chi-Jane Wang and Pau-Choo Chung. 2011. Music Emotion Recognition with Consideration of Personal Preference. IEEE Transactions on Multimedia.
  12. Bin Zhu. 2010. Music emotion recognition system based on improved GA-BP. IEEE Int. Conf. Computer Design and Applications, Vol. 2, 409-412.
  13. Byeong-jun Han, Seungmin Rho, Roger B. Dannenberg and Eenjun Hwang. 2009. SMERS: Music Emotion Recognition Using Support Vector Regression. 10th International Society for Music Information Retrieval Conference (ISMIR 2009), 651-656.
  14. Marius Kaminskas and Francesco Ricci. 2012. Contextual music information retrieval and recommendation: State of the art and challenges. Computer Science Review, 6 (2012), 89-11.
  15. Tao Li and Mitsunori Ogihara. 2004. Content-Based Music Similarity Search and Emotion Detection. International Conference on Acoustics, Speech and Signal Processing (ICASSP 2004), 705-708.
  16. Youngmoo E. Kim, Erik M. Schmidt, Raymond Migneco and Brandon G. Morton. Music Emotion Recognition: A State of the Art Review.
  17. Patil, K. J., Zope, P. H. and Suralkar, S. R. 2012. Emotion detection from speech using MFCC and GMM.
  18. Deshuang Huang. 1999. The bottleneck behaviours in linear feedforward neural network classifiers and their breakthrough. Computer Science and Technology, 14(1), 34-43.
  19. Palanivel, S. 2004. Person authentication using speech, face and visual speech. Ph.D. Thesis, Department of Computer Science and Engineering, Indian Institute of Technology, Madras.
  20. Bianchini, M., Frasconi, P. and Gori, M. 1995. Learning in multilayered networks used as autoassociators. IEEE Transactions on Neural Networks, 6, 512-515.
  21. Kishore, S. P. and Yegnanarayana, B. 2001. Online text-independent speaker verification system using autoassociative neural network models. In Proc. International Joint Conference on Neural Networks, Washington, DC, USA.
  22. Yegnanarayana, B. and Kishore, S. P. 2002. AANN: an alternative to GMM for pattern recognition. Neural Networks, 15, 459-569.
  23. Vapnik, V. 1998. Statistical Learning Theory. New York: John Wiley and Sons.
  24. Xu, C., Maddage, N. C. and Shao, X. 2005. Automatic music classification and summarization. IEEE Transactions on Speech and Audio Processing, 13(3), 441-450.
  25. Dhanalakshmi, P., Palanivel, S. and Ramalingam, V. 2009. Classification of audio signals using SVM and RBFNN. Expert Systems with Applications, 36(3, part 2), 6069-6075.
  26. Yashpalsing Chavhan, Dhore, M. L. and Pallavi Yesaware. 2010. Speech Emotion Recognition Using Support Vector Machine. International Journal of Computer Applications, 1, 6-9.
Index Terms

Computer Science
Information Sciences

Keywords

Mel frequency cepstral coefficients, Autoassociative neural networks, Support vector machine, Music emotion recognition