Research Article

Audio Scenarios Detection Technique

by Ajay Kadam, Ramesh M. Kagalkar
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 120 - Number 16
Year of Publication: 2015
DOI: 10.5120/21314-4297

Ajay Kadam, Ramesh M. Kagalkar. Audio Scenarios Detection Technique. International Journal of Computer Applications 120, 16 (June 2015), 33-37. DOI=10.5120/21314-4297

@article{ 10.5120/21314-4297,
author = { Ajay Kadam, Ramesh M. Kagalkar },
title = { Audio Scenarios Detection Technique },
journal = { International Journal of Computer Applications },
issue_date = { June 2015 },
volume = { 120 },
number = { 16 },
month = { June },
year = { 2015 },
issn = { 0975-8887 },
pages = { 33-37 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume120/number16/21314-4297/ },
doi = { 10.5120/21314-4297 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
Abstract

The proposed research objective is to build a framework for automatic recognition of audio. In this framework the main task is to take any input audio stream, analyze it, and predict the probability of the different sounds that appear in it, and on that basis to develop and commercially deploy a flexible audio search engine. The algorithm is noise and distortion resistant, computationally efficient, and massively scalable, capable of quickly identifying a short segment of an audio stream captured through a phone microphone in the presence of foreground voices and other dominant noise, and through voice codec compression, out of a database of available tracks. The algorithm uses a combinatorial hashed time-frequency constellation analysis of the audio, yielding properties such as transparency, in which multiple tracks mixed together may each be identified.
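The constellation-analysis idea described above can be illustrated with a short sketch: pick prominent peaks from a spectrogram and hash pairs of nearby peaks into compact, noise-robust fingerprints. The code below is a minimal, hypothetical illustration of that general technique, not the paper's implementation; the window size, peak neighbourhood, and fan-out values are assumptions chosen only for demonstration.

```python
# Minimal sketch of combinatorial hashed time-frequency constellation analysis.
# Assumptions: parameter values (nperseg, neighborhood, fan_out, max_dt) are
# illustrative, not taken from the paper.
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

def constellation_peaks(audio, fs, nperseg=1024, neighborhood=15):
    """Return (freq_bin, time_bin) indices of local spectrogram maxima."""
    freqs, times, sxx = spectrogram(audio, fs, nperseg=nperseg)
    log_sxx = 10 * np.log10(sxx + 1e-10)            # work in dB to tame dynamic range
    local_max = maximum_filter(log_sxx, size=neighborhood) == log_sxx
    strong = log_sxx > log_sxx.mean()                # keep only prominent peaks
    fbins, tbins = np.nonzero(local_max & strong)
    order = np.argsort(tbins)                        # sort peaks by time
    return fbins[order], tbins[order]

def fingerprint_hashes(fbins, tbins, fan_out=5, max_dt=64):
    """Pair each anchor peak with a few later peaks and hash (f1, f2, dt)."""
    hashes = []
    for i in range(len(tbins)):
        for j in range(i + 1, min(i + 1 + fan_out, len(tbins))):
            dt = tbins[j] - tbins[i]
            if 0 < dt <= max_dt:
                h = hash((int(fbins[i]), int(fbins[j]), int(dt)))
                hashes.append((h, int(tbins[i])))    # (hash, anchor time offset)
    return hashes

if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 2.0, 1 / fs)
    tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.random.randn(t.size)  # noisy pure tone
    f, tb = constellation_peaks(tone, fs)
    print(f"{len(fingerprint_hashes(f, tb))} hashes from {len(tb)} peaks")
```

Because each hash encodes a pair of peak frequencies and their time gap rather than raw spectral energy, matches survive additive noise and codec compression, and several overlapping tracks can be identified independently, which is the transparency property the abstract mentions.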

Index Terms

Computer Science
Information Sciences

Keywords

Fingerprinting
Pure tone
White noise