Research Article

Adaptive Learning for Algorithm Selection in Classification

by Nitin Pise, Parag Kulkarni
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 52 - Number 11
Year of Publication: 2012
DOI: 10.5120/8244-1753

Nitin Pise, Parag Kulkarni. Adaptive Learning for Algorithm Selection in Classification. International Journal of Computer Applications. 52, 11 (August 2012), 7-12. DOI=10.5120/8244-1753

@article{10.5120/8244-1753,
  author     = {Nitin Pise and Parag Kulkarni},
  title      = {Adaptive Learning for Algorithm Selection in Classification},
  journal    = {International Journal of Computer Applications},
  issue_date = {August 2012},
  volume     = {52},
  number     = {11},
  month      = {August},
  year       = {2012},
  issn       = {0975-8887},
  pages      = {7-12},
  numpages   = {6},
  url        = {https://ijcaonline.org/archives/volume52/number11/8244-1753/},
  doi        = {10.5120/8244-1753},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
Abstract

No learner is universally better than another: if one learner outperforms another in some learning situations, it usually performs worse in others. In other words, no single learning algorithm performs well and uniformly outperforms all other algorithms across all learning or data mining tasks. At the same time, the number of algorithms and practices available for the very same application keeps growing. Given this explosion of available learning algorithms, methods for helping users select the most appropriate algorithm, or combination of algorithms, for a given problem are becoming increasingly important. In this paper we use meta-learning to relate the performance of machine learning algorithms to the characteristics of different datasets. The paper concludes by proposing a system that can learn dynamically from the given data.
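The core idea, relating algorithm performance to dataset characteristics through a meta-learner, can be sketched in a few lines. The following Python sketch uses scikit-learn and is illustrative only, not the authors' implementation: the meta-features, candidate learners, and synthetic datasets are assumptions chosen for brevity.

# A minimal sketch of meta-learning for algorithm selection.
# The meta-features and candidate learners are hypothetical choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

CANDIDATES = {
    "tree": DecisionTreeClassifier(random_state=0),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

def meta_features(X, y):
    """Simple dataset characteristics (an assumed, illustrative set)."""
    n, d = X.shape
    return [n, d, len(np.unique(y)), float(np.mean(np.std(X, axis=0)))]

def best_algorithm(X, y):
    """Label a dataset with the candidate that wins under 5-fold CV accuracy."""
    scores = {name: cross_val_score(clf, X, y, cv=5).mean()
              for name, clf in CANDIDATES.items()}
    return max(scores, key=scores.get)

# Build a meta-dataset: one row of meta-features per base dataset,
# labeled with the empirically best algorithm on that dataset.
meta_X, meta_y = [], []
for seed in range(30):
    X, y = make_classification(n_samples=200 + 20 * seed,
                               n_features=5 + seed % 10,
                               n_informative=3, random_state=seed)
    meta_X.append(meta_features(X, y))
    meta_y.append(best_algorithm(X, y))

# The meta-learner maps dataset characteristics to a recommended algorithm.
meta_learner = DecisionTreeClassifier(random_state=0).fit(meta_X, meta_y)

# Recommend an algorithm for an unseen dataset.
X_new, y_new = make_classification(n_samples=500, n_features=8,
                                   n_informative=4, random_state=99)
print("recommended:", meta_learner.predict([meta_features(X_new, y_new)])[0])

A dynamic system in the spirit of the paper would extend this loop by appending each newly seen dataset and its winning algorithm to the meta-dataset and refitting the meta-learner, so recommendations improve as more data arrives.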

Index Terms

Computer Science
Information Sciences

Keywords

Learning algorithms, dataset characteristics, algorithm selection