Research Article

Self-Training using a K-Nearest Neighbor as a Base Classifier Reinforced by Support Vector Machines

by M’bark Iggane, Abdelatif Ennaji, Driss Mammass, Mostafa El Yassa
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 56 - Number 6
Year of Publication: 2012
DOI: 10.5120/8899-2925

M’bark Iggane, Abdelatif Ennaji, Driss Mammass, Mostafa El Yassa. Self-Training using a K-Nearest Neighbor as a Base Classifier Reinforced by Support Vector Machines. International Journal of Computer Applications 56, 6 (October 2012), 43-46. DOI=10.5120/8899-2925

@article{10.5120/8899-2925,
  author = {M’bark Iggane and Abdelatif Ennaji and Driss Mammass and Mostafa El Yassa},
  title = {Self-Training using a K-Nearest Neighbor as a Base Classifier Reinforced by Support Vector Machines},
  journal = {International Journal of Computer Applications},
  issue_date = {October 2012},
  volume = {56},
  number = {6},
  month = {October},
  year = {2012},
  issn = {0975-8887},
  pages = {43-46},
  numpages = {4},
  url = {https://ijcaonline.org/archives/volume56/number6/8899-2925/},
  doi = {10.5120/8899-2925},
  publisher = {Foundation of Computer Science (FCS), NY, USA},
  address = {New York, USA}
}
Abstract

In supervised learning, algorithms infer a general prediction model from previously labeled data. In many real-world machine learning problems, however, labeled data are scarce while unlabeled data are abundant. The reliability of the learned model depends essentially on the size of the training set (the labeled data): if the amount of labeled data is too small, the generalization error of the learned model may be large. In such a situation, a semi-supervised learning algorithm can improve the generalization performance of the model by integrating unlabeled data into the learning process. One of the most classical semi-supervised learning methods is self-training. An advantage of this method is that many traditional supervised learning algorithms can be used as the base classifier in the self-training process.
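To make the self-training idea concrete, the following is a minimal sketch of such a loop in the spirit of the paper's title: a k-NN base classifier proposes labels for unlabeled points, and an SVM trained on the same labeled pool confirms them before they are added to the training set. The agreement-based selection rule, the synthetic data, and all parameter values are illustrative assumptions (using scikit-learn and NumPy), not the authors' exact procedure.

# Illustrative self-training sketch: k-NN base classifier, SVM as a check.
# This is an assumed reconstruction of the general technique, not the
# paper's exact algorithm. Requires scikit-learn and NumPy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Simulate the semi-supervised setting: few labeled points, many unlabeled.
labeled_idx = rng.choice(len(X), size=30, replace=False)
mask = np.zeros(len(X), dtype=bool)
mask[labeled_idx] = True
X_l, y_l = X[mask], y[mask]
X_u = X[~mask]

for _ in range(10):  # self-training iterations
    if len(X_u) == 0:
        break
    knn = KNeighborsClassifier(n_neighbors=3).fit(X_l, y_l)
    svm = SVC().fit(X_l, y_l)
    knn_pred = knn.predict(X_u)
    svm_pred = svm.predict(X_u)
    # Keep only the unlabeled points where the SVM confirms the k-NN label.
    agree = knn_pred == svm_pred
    if not agree.any():
        break
    X_l = np.vstack([X_l, X_u[agree]])
    y_l = np.concatenate([y_l, knn_pred[agree]])
    X_u = X_u[~agree]

print(f"labeled pool grew to {len(X_l)} examples")

A natural refinement of this sketch would rank the candidate points by the SVM's decision-function margin and add only the most confident pseudo-labels at each iteration, rather than all points on which the two classifiers agree.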

Index Terms

Computer Science
Information Sciences

Keywords

Semi-supervised Learning, Self-training, k-NN, SVM