International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 56 - Number 6
Year of Publication: 2012
Authors: M'bark Iggane, Abdelatif Ennaji, Driss Mammass, Mostafa El Yassa
DOI: 10.5120/8899-2925
M'bark Iggane, Abdelatif Ennaji, Driss Mammass, Mostafa El Yassa. Self-Training using a K-Nearest Neighbor as a Base Classifier Reinforced by Support Vector Machines. International Journal of Computer Applications 56(6):43-46, October 2012. DOI=10.5120/8899-2925
In supervised learning, algorithms infer a general prediction model from previously labeled data. However, in many real-world machine learning problems, labeled data are scarce while unlabeled data are abundant. The reliability of the learned model depends essentially on the size of the training set (the labeled data): if the amount of labeled data is too small, the generalization error of the learned model may be large. In such a situation, a semi-supervised learning algorithm can improve the generalization performance of the model by integrating unlabeled data into the learning process. One of the most classical semi-supervised learning methods is self-training. An advantage of this method is that many traditional supervised learning algorithms can be used to build the model within the self-training process.
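As a rough illustration of the self-training idea described above, the sketch below iteratively trains a k-nearest-neighbor base classifier on the labeled set, labels the unlabeled points it is most confident about, and moves them into the labeled set. This is a generic self-training loop, not the paper's exact procedure; in particular, the SVM reinforcement step used in the paper is not reproduced, the confidence threshold and placeholder arrays (X_lab, y_lab, X_unlab) are assumptions, and scikit-learn is assumed to be available.

```python
# Minimal self-training sketch with a KNN base classifier (assumes scikit-learn).
# X_lab, y_lab, X_unlab are placeholder arrays standing in for real data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def self_train(X_lab, y_lab, X_unlab, n_iter=10, confidence=0.9, k=3):
    """Iteratively add the most confidently predicted unlabeled points to the labeled set."""
    X_lab, y_lab, X_unlab = np.asarray(X_lab), np.asarray(y_lab), np.asarray(X_unlab)
    for _ in range(n_iter):
        if len(X_unlab) == 0:
            break
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_lab, y_lab)
        preds = knn.predict(X_unlab)                 # predicted labels for unlabeled data
        conf = knn.predict_proba(X_unlab).max(axis=1)  # confidence of each prediction
        mask = conf >= confidence                    # keep only confident predictions
        if not mask.any():
            break
        # Move confident points, with their predicted labels, into the labeled set.
        X_lab = np.vstack([X_lab, X_unlab[mask]])
        y_lab = np.concatenate([y_lab, preds[mask]])
        X_unlab = X_unlab[~mask]
    # Final model trained on the enlarged labeled set.
    return KNeighborsClassifier(n_neighbors=k).fit(X_lab, y_lab)
```

In the paper's variant, the confident KNN predictions would additionally be checked against a support vector machine before being added to the training set; the loop structure above stays the same.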