Research Article

Speech/Music Classification using SVM

by R. Thiruvengatanadhan, P. Dhanalakshmi, P. Suresh Kumar
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 65 - Number 6
Year of Publication: 2013
Authors: R. Thiruvengatanadhan, P. Dhanalakshmi, P. Suresh Kumar
10.5120/10931-5875

R. Thiruvengatanadhan, P. Dhanalakshmi, P. Suresh Kumar. Speech/Music Classification using SVM. International Journal of Computer Applications. 65, 6 (March 2013), 36-41. DOI=10.5120/10931-5875

@article{ 10.5120/10931-5875,
author = { R. Thiruvengatanadhan, P. Dhanalakshmi, P. Suresh Kumar },
title = { Speech/Music Classification using SVM },
journal = { International Journal of Computer Applications },
issue_date = { March 2013 },
volume = { 65 },
number = { 6 },
month = { March },
year = { 2013 },
issn = { 0975-8887 },
pages = { 36-41 },
numpages = { 6 },
url = { https://ijcaonline.org/archives/volume65/number6/10931-5875/ },
doi = { 10.5120/10931-5875 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A R. Thiruvengatanadhan
%A P. Dhanalakshmi
%A P. Suresh Kumar
%T Speech/Music Classification using SVM
%J International Journal of Computer Applications
%@ 0975-8887
%V 65
%N 6
%P 36-41
%D 2013
%I Foundation of Computer Science (FCS), NY, USA
Abstract

With the rapid growth in the volume of audio data, audio classification has become a fundamental processing step. Automatic audio classification is very useful in audio indexing, content-based audio retrieval, and online audio distribution. The accuracy of the classification relies on the strength of the features and the classification scheme. In this work, both time-domain and frequency-domain features are extracted from the input signal. The time-domain features are Zero Crossing Rate (ZCR) and Short Time Energy (STE). The frequency-domain features are spectral centroid, spectral flux, spectral entropy, and spectral roll-off. After feature extraction, classification is carried out using an SVM model. The proposed feature extraction and classification scheme results in better accuracy in speech/music classification.
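
As a rough illustration of the pipeline described in the abstract, the sketch below extracts the named time-domain features (ZCR, STE) and frequency-domain features (spectral centroid, flux, entropy, roll-off) per frame and trains an SVM on them. This is a minimal sketch only, assuming NumPy and scikit-learn are available; the frame length, hop size, 85% roll-off threshold, RBF kernel, and the synthetic "speech-like"/"music-like" clips are illustrative choices, not taken from the paper.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def frame_signal(x, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def frame_features(frame, sr):
    """Time- and frequency-domain features for a single frame."""
    # Time domain: Zero Crossing Rate (ZCR) and Short Time Energy (STE).
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    ste = np.mean(frame ** 2)

    # Frequency domain: normalized magnitude spectrum of the frame.
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = spec / (spec.sum() + 1e-12)

    centroid = np.sum(freqs * p)                  # spectral centroid
    entropy = -np.sum(p * np.log2(p + 1e-12))     # spectral entropy
    idx = np.searchsorted(np.cumsum(p), 0.85)     # 85% roll-off point (illustrative threshold)
    rolloff = freqs[min(idx, len(freqs) - 1)]
    return zcr, ste, centroid, entropy, rolloff, p

def extract_features(x, sr):
    """Per-frame feature vectors for a signal, including spectral flux between frames."""
    feats, prev_p = [], None
    for frame in frame_signal(x):
        zcr, ste, centroid, entropy, rolloff, p = frame_features(frame, sr)
        flux = 0.0 if prev_p is None else np.sum((p - prev_p) ** 2)
        prev_p = p
        feats.append([zcr, ste, centroid, flux, entropy, rolloff])
    return np.array(feats)

if __name__ == "__main__":
    sr = 16000
    rng = np.random.default_rng(0)
    # Stand-ins for labelled training audio: a noise-like "speech" clip and a tone-like "music" clip.
    speech_like = rng.standard_normal(sr)
    music_like = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)

    X = np.vstack([extract_features(speech_like, sr), extract_features(music_like, sr)])
    y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))  # 0 = speech, 1 = music

    # Feature scaling followed by an RBF-kernel SVM classifier on the frame features.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
    print("Predicted labels for the first five frames:", clf.predict(X[:5]))

In practice, the per-frame decisions would be aggregated over longer segments and the classifier evaluated on real labelled speech and music recordings rather than synthetic signals.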

References
  1. Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. A training algorithm for optimal margin classifiers. In 5th Annual ACM Workshop on COLT, pages 144–152. ACM Press, 1992.
  2. J. Breebaart and M. McKinney. Features for audio classification. Int. Conf. on MIR, 2003.
  3. F. Gouyon, F. Pachet, and O. Delerue. Classifying percussive sounds: a matter of zero crossing rate. Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-00), December 2000. Verona, Italy.
  4. Hongchen Jiang, Junmei Bai, Shuwu Zhang, and Bo Xu. SVM-based audio scene classification. Proc. IEEE Int. Conf. Natural Lang. Processing and Knowledge Engineering, pages 131–136, October 2005.
  5. B. Liang, H. Yanli, L. Songyang, C. Jianyun, and W. Lingda. Feature analysis and extraction for audio automatic classification. Proc. IEEE Int. Conf. Systems, pages 767–772, October 2005.
  6. Lim and Chang. Enhancing support vector machine-based speech/music classification using conditional maximum a posteriori criterion. Signal Processing, IET, 6(4):335–340, June 2012.
  7. Lie Lu, Hong-Jiang Zhang, and Stan Z. Li. Content-based audio classification and segmentation by using support vector machines. Multimedia Systems, 8:482–492, February 2003.
  8. Ingo Mierswa and Katharina Morik. Automatic feature extraction for classifying audio data. Machine Learning Journal, 58(2):127–149, February 2005.
  9. Chungsoo Lim, Yeon-Woo Lee, and Joon-Hyuk Chang. New techniques for improving the practicality of an SVM-based speech/music classifier. Acoustics, Speech and Signal Processing (ICASSP), pages 1657–1660, March 2012.
  10. S. Nilufar, N. Molla, and K. Hirose. Spectrogram based features selection using multiple kernel learning for speech/music discrimination. Acoustics, Speech and Signal Processing (ICASSP), pages 501–504, March 2012.
  11. C. Panagiotakis and G. Tziritas. A speech/music discriminator based on RMS and zero-crossings. IEEE Trans. Multimedia, 7(5):155–156, February 2005.
  12. G. Peeters. A large set of audio features for sound description. Tech. Rep., IRCAM, 2004.
  13. L. Rabiner and R. W. Schafer. Digital processing of speech signals. Pearson Education, 2005.
  14. Toru Taniguchi, Mikio Tohyama, and Katsuhiko Shirai. Detection of speech and music based on spectral tracking. Speech Communication, 50:547–563, April 2008.
  15. Changsheng Xu, Namunu C. Maddage, and Xi Shao. Automatic music classification and summarization. IEEE Trans. Speech and Audio Processing, 13(3):441–450, May 2005.
Index Terms

Computer Science
Information Sciences

Keywords

Audio classification, Feature extraction, Zero Crossing Rate (ZCR), Short Time Energy (STE), Spectral centroid, Spectral flux, Spectral entropy, Spectral roll-off, Support Vector Machine (SVM)