Research Article

Representation of Musical Signals using Instrument-Specific Dictionaries

by Mohammadali Azamian
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 175 - Number 9
Year of Publication: 2017
Authors: Mohammadali Azamian
10.5120/ijca2017915107

Mohammadali Azamian. Representation of Musical Signals using Instrument-Specific Dictionaries. International Journal of Computer Applications. 175, 9 (Oct 2017), 22-26. DOI=10.5120/ijca2017915107

@article{ 10.5120/ijca2017915107,
author = { Mohammadali Azamian },
title = { Representation of Musical Signals using Instrument-Specific Dictionaries },
journal = { International Journal of Computer Applications },
issue_date = { Oct 2017 },
volume = { 175 },
number = { 9 },
month = { Oct },
year = { 2017 },
issn = { 0975-8887 },
pages = { 22-26 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume175/number9/28580-2017915107/ },
doi = { 10.5120/ijca2017915107 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Mohammadali Azamian
%T Representation of Musical Signals using Instrument-Specific Dictionaries
%J International Journal of Computer Applications
%@ 0975-8887
%V 175
%N 9
%P 22-26
%D 2017
%I Foundation of Computer Science (FCS), NY, USA
Abstract

A simple new method is proposed for synthesizing instrument-specific dictionaries, and its use in time-domain representation of musical signals is examined. Inspection of the spectra of musical note signals shows that only a small number of frequency elements are significant in the inherent structure of a note; the remaining elements can be omitted. This sparsity is exploited to synthesize note-specific atoms. First, basis functions called primary atoms are defined from the long-term spectrum of the note signal. The primary atoms that satisfy certain conditions are then selected as basic atoms and combined to synthesize note-specific atoms. Common signal processing windows, such as the Gaussian and Hamming windows, are also examined for synthesizing note-specific atoms. The note-specific atoms of an instrument are collected into an instrument-specific dictionary, and a musical signal is represented by mapping it onto this dictionary with the Matching Pursuit algorithm. The proposed method was evaluated on the RWC musical sound database; the results show that it improves the quality of signal representation compared with some previous methods.
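The representation step the abstract describes — mapping a signal onto a dictionary of atoms with Matching Pursuit (Mallat and Zhang, 1993) — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sinusoidal toy dictionary and the function and variable names (`matching_pursuit`, `atoms`) are assumptions for demonstration, standing in for the note-specific atoms the method actually synthesizes.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=50):
    """Greedy Matching Pursuit.

    dictionary: (n_samples, n_atoms) array with unit-norm columns (the atoms).
    Returns the atom coefficients and the final residual.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        # Correlate the residual with every atom and pick the best match.
        projections = dictionary.T @ residual
        k = np.argmax(np.abs(projections))
        # Add that atom's contribution and subtract it from the residual.
        coeffs[k] += projections[k]
        residual -= projections[k] * dictionary[:, k]
    return coeffs, residual

# Toy dictionary: 32 unit-norm sinusoidal atoms of 256 samples each
# (a stand-in for an instrument-specific dictionary of note atoms).
n = 256
atoms = np.stack([np.sin(2 * np.pi * f * np.arange(n) / n)
                  for f in range(1, 33)], axis=1)
atoms /= np.linalg.norm(atoms, axis=0)

# A "musical signal" built from two atoms; MP should recover both.
x = 3.0 * atoms[:, 4] + 1.5 * atoms[:, 10]
coeffs, residual = matching_pursuit(x, atoms, n_iter=10)
```

Because the toy atoms here are mutually orthogonal, the residual vanishes after two iterations; with the paper's overlapping note-specific atoms, MP instead converges gradually, which is why representation quality depends on how well the dictionary matches the instrument.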

References
  1. D. P. W. Ellis, 2006. “Extracting information from music audio,” Communications of the ACM, vol. 49, no. 8, p. 32.
  2. N. Cho, C. C. J. Kuo, 2010. “Sparse representation of musical signals using source-specific dictionaries,” IEEE Signal Processing Letters, vol. 17, no. 11, pp. 913-916. doi:10.1109/LSP.2010.2071864
  3. G. J. Jang, T. W. Lee, Y. H. Oh, 2003. “Single-channel signal separation using time-domain basis functions,” IEEE Signal Processing Letters, vol. 10, no.6, pp. 168-171. doi:10.1109/LSP.2003.811630
  4. H. Huang, J. Yu, W. Sun, 2014. “Super-resolution mapping via multi-dictionary based sparse representation,” Int. Conf. on Acoustics Speech and Signal Processing, IEEE, Florence, pp. 3523-3527.
  5. Y. Xu, G. Bao, X. Xu, Z. Ye, 2015. “Single-channel speech separation using sequential discriminative dictionary learning,” Signal Processing, vol. 106, pp. 134–140. doi:10.1016/j.sigpro.2014.07.012
  6. M. Yaghoobi, T. Blumensath, M. E. Davies, 2009. “Dictionary learning for sparse approximations with the majorization method,” IEEE Transactions on Signal Processing, vol. 57, no. 6, pp. 2178–2191. doi:10.1109/TSP.2009.2016257
  7. T. Blumensath and M. Davies, 2006. “Sparse and shift-invariant representations of music,” IEEE Transactions on Audio, Speech and Language Processing, vol. 14, no. 1, pp. 50–57. doi:10.1109/TSA.2005.860346
  8. S. A. Abdallah and M. D. Plumbley, 2006. “Unsupervised Analysis of Polyphonic Music by Sparse Coding,” IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 179–196. doi:10.1109/TNN.2005.861031
  9. Y. Vaizman, B. McFee, G. Lanckriet, 2014. “Codebook-based audio feature representation for music information retrieval,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 10, pp. 1483-1493. doi:10.1109/TASLP.2014.2337842
  10. N. Cho, C. C. J. Kuo, 2011. “Sparse music representation with source-specific dictionaries and its application to signal separation,” IEEE Transactions on Audio Speech Lang. Process., vol. 19, no. 2, pp. 337-348. doi:10.1109/TASL.2010.2047810
  11. S. G. Mallat, Z. Zhang, 1993. “Matching pursuit with time-frequency dictionaries,” IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397–3415. doi:10.1109/78.258082
  12. M. Azamian, E. Kabir, S. Seyedin, E. Masehian, 2017. “An adaptive sparse algorithm for synthesizing note-specific atoms by spectrum analysis, applied to musical signal separation,” Advances in electrical and computer engineering, vol. 17, no. 2, pp. 103-112. doi:10.4316/AECE.2017.02014
  13. M. Goto, H. Hashiguchi, T. Nishimura, R. Oka, 2003. “RWC music database: musical instrument sound database,” ISMIR, pp. 229-230.
Index Terms

Computer Science
Information Sciences

Keywords

Signal Representation, Signal Mapping, Audio Signal Processing, Spectral Analysis, Signal Reconstruction