Research Article

An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions

by Anukriti Dureha
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 100 - Number 9
Year of Publication: 2014
Authors: Anukriti Dureha
10.5120/17557-8163

Anukriti Dureha. An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions. International Journal of Computer Applications 100, 9 (August 2014), 33-39. DOI=10.5120/17557-8163

@article{ 10.5120/17557-8163,
author = { Anukriti Dureha },
title = { An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions },
journal = { International Journal of Computer Applications },
issue_date = { August 2014 },
volume = { 100 },
number = { 9 },
month = { August },
year = { 2014 },
issn = { 0975-8887 },
pages = { 33-39 },
numpages = {9},
url = { https://ijcaonline.org/archives/volume100/number9/17557-8163/ },
doi = { 10.5120/17557-8163 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Anukriti Dureha
%T An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions
%J International Journal of Computer Applications
%@ 0975-8887
%V 100
%N 9
%P 33-39
%D 2014
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Manual segregation of a playlist and annotation of songs, in accordance with the current emotional state of a user, is labor intensive and time consuming. Numerous algorithms have been proposed to automate this process. However, the existing algorithms are slow, increase the overall cost of the system by requiring additional hardware (e.g., EEG systems and sensors), and have lower accuracy. This paper presents an algorithm that automates the generation of an audio playlist based on the facial expressions of a user, saving the time and labor invested in performing the process manually. The proposed algorithm aims to reduce the overall computational time and cost of the designed system, and to increase its accuracy. The facial expression recognition module of the proposed algorithm is validated by testing the system against user-dependent and user-independent datasets. Experimental results indicate that the user-dependent tests give 100% accuracy, while user-independent results are 100% for joy and surprise but 84.3%, 80%, and 66% for sadness, anger, and fear, respectively. The overall accuracy of the emotion recognition algorithm on the user-independent dataset is 86%. For audio, 100% recognition rates are obtained for sad, sad-anger, and joy-anger, while joy and anger are recognized at 95.4% and 90%, respectively. The overall accuracy of the audio emotion recognition algorithm is 98%. Implementation and testing of the proposed algorithm are carried out using an inbuilt camera; hence, the proposed algorithm successfully reduces the overall cost of the system. On average, the proposed algorithm takes 1.10 s to generate a playlist from a facial expression, yielding better performance in terms of computational time than the algorithms in the existing literature.
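To make the described pipeline concrete, the sketch below shows only the general shape of such a system, not the authors' implementation (which the abstract does not specify): capture a frame from the inbuilt camera, classify the facial expression, and return the playlist annotated with that emotion. OpenCV is used for frame capture; classify_expression and the PLAYLISTS mapping are hypothetical placeholders standing in for the paper's recognition module and its pre-annotated song library.

# Illustrative sketch only; not the algorithm proposed in the paper.
import cv2

# Hypothetical mapping from a detected emotion to a pre-annotated playlist.
PLAYLISTS = {
    "joy": ["upbeat_01.mp3", "upbeat_02.mp3"],
    "sad": ["mellow_01.mp3", "mellow_02.mp3"],
    "anger": ["calm_01.mp3"],
    "surprise": ["energetic_01.mp3"],
    "fear": ["soothing_01.mp3"],
}

def classify_expression(frame) -> str:
    """Placeholder for a facial expression recognition module.

    A real implementation would extract facial features from `frame`
    and classify them into one of the emotion classes listed above.
    """
    raise NotImplementedError

def generate_playlist() -> list:
    # Capture a single frame from the inbuilt camera (device 0).
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        return []
    # Map the recognized emotion to its playlist; empty if unrecognized.
    emotion = classify_expression(frame)
    return PLAYLISTS.get(emotion, [])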

Index Terms

Computer Science
Information Sciences

Keywords

Audio Emotion Recognition, Music Information Retrieval, Facial Expression Recognition, Music Recommendation Systems, Audio Feature Extraction