Research Article

Architecture for Playing Songs using Audio Content Analysis according to First Chosen Song

by Chirag Juyal, R H Goudar
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 53 - Number 16
Year of Publication: 2012
DOI: 10.5120/8506-2313

Chirag Juyal and R H Goudar. Architecture for Playing Songs using Audio Content Analysis according to First Chosen Song. International Journal of Computer Applications 53, 16 (September 2012), 18-22. DOI=10.5120/8506-2313

@article{ 10.5120/8506-2313,
author = { Chirag Juyal, R H Goudar },
title = { Architecture for Playing Songs using Audio Content Analysis according to First Chosen Song },
journal = { International Journal of Computer Applications },
issue_date = { September 2012 },
volume = { 53 },
number = { 16 },
month = { September },
year = { 2012 },
issn = { 0975-8887 },
pages = { 18-22 },
numpages = {5},
url = { https://ijcaonline.org/archives/volume53/number16/8506-2313/ },
doi = { 10.5120/8506-2313 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Chirag Juyal
%A R H Goudar
%T Architecture for Playing Songs using Audio Content Analysis according to First Chosen Song
%J International Journal of Computer Applications
%@ 0975-8887
%V 53
%N 16
%P 18-22
%D 2012
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Music is one of the basic human needs for recreation and entertainment. Song files are now digitized, and options for random playback, such as shuffle, are in common use. Shuffle picks songs at random and tends to favor the most-played tracks. There is therefore a need to retrieve and recommend songs based on one's mood, inferred from the first song chosen. This paper presents a well-defined architecture for playing songs based on the chosen song, using audio content analysis and an audio detector. The audio content analysis uses features such as intensity, timbre, and rhythm to map music with related features; the audio detector then detects and plays songs with similar features.
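The pipeline the abstract describes can be illustrated with a minimal sketch: extract a small feature vector per track and pick the library track closest to the seed. The feature choices here (RMS energy for intensity, spectral centroid for timbre, zero-crossing rate for rhythm) are hypothetical stand-ins for the paper's features, not its actual method, and the synthetic signals are only for demonstration.

```python
import numpy as np

def extract_features(signal, sr=22050):
    # Hypothetical proxies for the paper's intensity, timbre, and rhythm
    # features: RMS energy, spectral centroid, and zero-crossing rate.
    rms = np.sqrt(np.mean(signal ** 2))                       # intensity proxy
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2       # rhythm proxy
    return np.array([rms, centroid / sr, zcr])                # normalized vector

def most_similar(seed, library):
    # Return the index of the library track whose feature vector lies
    # closest (Euclidean distance) to the seed track's features.
    seed_f = extract_features(seed)
    dists = [np.linalg.norm(seed_f - extract_features(s)) for s in library]
    return int(np.argmin(dists))

# Synthetic one-second "songs" at a 22050 Hz sample rate.
t = np.linspace(0, 1, 22050, endpoint=False)
seed = np.sin(2 * np.pi * 440 * t)             # 440 Hz tone
library = [
    np.sin(2 * np.pi * 445 * t),               # near-identical pitch and level
    0.2 * np.sin(2 * np.pi * 2000 * t),        # brighter and quieter
]
print(most_similar(seed, library))             # picks the near-identical track: 0
```

In a real system the library's feature vectors would be precomputed and indexed, so the detector only extracts features for the first chosen song and runs a nearest-neighbor lookup.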

References
  1. Dan Liu, Lie Lu and Hong-Jiang Zhang, Automatic Mood Detection from Acoustic Music Data, ISMIR, Johns Hopkins University, 2003.
  2. Yu-Hao Chen, Jin-Hau Kuo, Wei-Ta Chu, and Ja-Ling Wu, Movie Emotional Event Detection based on Music Mood and Video Tempo, IEEE, 2006.
  3. Tao Li and Mitsunori Ogihara, Detecting Emotion in Music, ISMIR, Johns Hopkins University, 2003.
  4. Owen Craigie Meyers, A Mood-Based Music Classification and Exploration System, MS thesis, MIT, 2007.
  5. Sofia Gustafson-Capkova, Emotions in Speech: Tag set and Acoustic Correlates, term paper, Stockholm University, 2001.
  6. Campbell, J. P. (1997). Speaker recognition: a tutorial. Proceedings of the IEEE, 85 (9), 1437-1462.
  7. Architecture for Automated Tagging and Clustering of Song Files According to Mood, TSSN, 2010.
  8. Hevner, K. (1935). Expression in music: a discussion of experimental studies and theories. Psychological Review, 42, 186-204.
  9. Masataka N. The origins of language and the evolution of music: A comparative perspective. Physics of Life Reviews 2009; 6:11–22.
Index Terms

Computer Science
Information Sciences

Keywords

Shuffle's alternative; automatic playing; playing on one's mood