Research Article

Emotion based Speaker Recognition with Vector Quantization

Published on May 2014 by Shraddha Bhandavle, Rasika Inamdar, Aarti Bakshi
International Conference on Electronics & Computing Technologies
Foundation of Computer Science USA
ICONECT - Number 1
May 2014
Authors: Shraddha Bhandavle, Rasika Inamdar, Aarti Bakshi

Shraddha Bhandavle, Rasika Inamdar, Aarti Bakshi. Emotion based Speaker Recognition with Vector Quantization. International Conference on Electronics & Computing Technologies. ICONECT, 1 (May 2014), 9-12.

@article{bhandavle2014emotion,
author = { Shraddha Bhandavle and Rasika Inamdar and Aarti Bakshi },
title = { Emotion based Speaker Recognition with Vector Quantization },
journal = { International Conference on Electronics & Computing Technologies },
issue_date = { May 2014 },
volume = { ICONECT },
number = { 1 },
month = { May },
year = { 2014 },
issn = { 0975-8887 },
pages = { 9-12 },
numpages = { 4 },
url = { /proceedings/iconect/number1/16475-1408/ },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Proceeding Article
%1 International Conference on Electronics & Computing Technologies
%A Shraddha Bhandavle
%A Rasika Inamdar
%A Aarti Bakshi
%T Emotion based Speaker Recognition with Vector Quantization
%J International Conference on Electronics & Computing Technologies
%@ 0975-8887
%V ICONECT
%N 1
%P 9-12
%D 2014
%I International Journal of Computer Applications
Abstract

Speech is one of the most popular biometrics used for human interaction. An emotion is a mental and physiological state of a person, associated with a variety of feelings and thoughts, and emotion-based speaker recognition has attracted many researchers. An emotion-based speaker recognition system recognizes a person's emotions based on pitch, speaking style, intensity, and sampling frequency. Mel Frequency Cepstral Coefficient (MFCC) extraction is the first step in a speaker recognition system. In this paper, we implement a gender-based modified MFCC approach to differentiate individuals. For classification, we use the K-means algorithm.
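The pipeline the abstract describes — per-speaker MFCC feature vectors quantized into codebooks by K-means, with identification by minimum average distortion — can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the "MFCC" matrices here are random stand-in data, and the function names are invented for the example.

```python
import numpy as np

def kmeans_codebook(features, k=4, iters=20, seed=0):
    """Build a vector-quantization codebook (k centroids) from feature rows."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned vectors.
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def avg_distortion(features, codebook):
    """Mean distance from each feature vector to its nearest codeword."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.min(axis=1).mean()

# Stand-in MFCC matrices (frames x 13 coefficients) for two enrolled speakers.
rng = np.random.default_rng(1)
speaker_a = rng.normal(0.0, 1.0, (200, 13))
speaker_b = rng.normal(3.0, 1.0, (200, 13))
codebooks = {"A": kmeans_codebook(speaker_a), "B": kmeans_codebook(speaker_b)}

# A test utterance is attributed to the speaker whose codebook quantizes it
# with the lowest average distortion.
test = rng.normal(0.0, 1.0, (50, 13))
best = min(codebooks, key=lambda s: avg_distortion(test, codebooks[s]))
print(best)
```

In a real system the feature rows would come from an MFCC front end rather than random draws, and the codebook size and iteration count would be tuned on held-out data.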

References
  1. Dimitrios Ververidis and Constantine Kotropoulos, "Emotional speech recognition: Resources, features, and methods", Artificial Intelligence and Information Analysis Laboratory, Department of Informatics, Aristotle University of Thessaloniki, Box 451, Thessaloniki 54124, Greece.
  2. Björn Schuller and Gerhard Rigoll, "Timing Levels in Segment-Based Speech Emotion Recognition", INTERSPEECH 2006 – ICSLP.
  3. Ankur Sapra, Nikhil Panwar, Sohan Panwar, "Emotion Recognition from Speech", International Journal of Emerging Technology and Advanced Engineering (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 3, Issue 2, Feb 2013).
  4. Ismail Shahin, "Speaker Identification in Emotional Environments", Iranian Journal of Electrical and Computer Engineering, Vol. 8, No. 1, Winter-Spring 2007.
  5. Nobuo Sato and Yasunari Obuchi, "Emotion Recognition using MFCCs", Information and Media Technologies 2(3): 835-848 (2007); reprinted from: Journal of Natural Language Processing 14(4): 83-96 (2007).
  6. Shupeng Xu, Yan Liu and Xiping Liu, "Speaker Recognition and Speech Emotion Recognition Based on GMM", Computer Science and Engineering Department, Changchun University of Technology, Changchun, China.
  7. Shashidhar G. Koolagudi, Kritika Sharma and K. Sreenivasa Rao, "Speaker Recognition in Emotional Environment", School of Computing, Graphic Era University, Dehradun-248002, Uttarakhand, India; School of Information Technology, Indian Institute of Technology Kharagpur, Kharagpur-721302, West Bengal, India.
  8. Ling Feng and Lars Kai Hansen, "A New Database for Speaker Recognition", Informatics and Mathematical Modelling, Technical University of Denmark, Richard Petersens Plads, Building 321, DK-2800 Kongens Lyngby, Denmark.
  9. Björn Schuller, Gerhard Rigoll and Manfred Lang, "Speech Emotion Recognition Combining Acoustic Features and Linguistic Information in a Hybrid Support Vector Machine - Belief Network Architecture", Institute for Human-Computer Communication, Technische Universität München.
  10. C. Clavel, I. Vasilescu, L. Devillers, G. Richard and T. Ehrette, "Fear-type emotion recognition for future audio-based surveillance systems", Thales Research and Technology France, RD 128, 91767 Palaiseau Cedex, France; LIMSI-CNRS, BP 133, 91403 Orsay Cedex, France; TELECOM ParisTech, 37 rue Dareau, 75014 Paris, France.
  11. Sanaul Haq and Philip J. B. Jackson, "Speaker-Dependent Audio-Visual Emotion Recognition", Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK.
Index Terms

Computer Science
Information Sciences

Keywords

Emotion Recognition from Speech, Fourier Transform, Traditional MFCC, Modified MFCC Approach, Nearest Neighbor Algorithm, K-means, Vector Quantization.