Research Article

Incremental-learning made easy

Published in November 2011 by Preeti Mulay, Dr. Parag A. Kulkarni
2nd National Conference on Information and Communication Technology
Foundation of Computer Science USA
NCICT - Number 2
November 2011
Authors: Preeti Mulay, Dr. Parag A. Kulkarni

Preeti Mulay, Dr. Parag A. Kulkarni. Incremental-learning made easy. 2nd National Conference on Information and Communication Technology. NCICT, 2 (November 2011), 12-15.

@article{ mulay2011incremental,
author = { Preeti Mulay and Parag A. Kulkarni },
title = { Incremental-learning made easy },
journal = { 2nd National Conference on Information and Communication Technology },
issue_date = { November 2011 },
volume = { NCICT },
number = { 2 },
month = { November },
year = { 2011 },
issn = { 0975-8887 },
pages = { 12-15 },
numpages = { 4 },
url = { /proceedings/ncict/number2/4285-ncict012/ },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}
%0 Proceeding Article
%1 2nd National Conference on Information and Communication Technology
%A Preeti Mulay
%A Dr.Parag A. Kulkarni
%T Incremental-learning made easy
%J 2nd National Conference on Information and Communication Technology
%@ 0975-8887
%V NCICT
%N 2
%P 12-15
%D 2011
%I International Journal of Computer Applications
Abstract

“Incremental learning” and “knowledge augmentation” are important parts of the field of advanced machine learning. In this era of the internet and 3G, data is accumulating at tremendous speed, so advanced data mining and machine learning techniques are needed to preserve the quality of this ever-increasing data. In this paper we show the application of our newly designed “incremental clustering” algorithm. We also survey two other incremental methods; the comparative study shows that our new algorithm delivers the best-quality results among the compared methods for effective decision making, estimation and forecasting on numeric data. We have designed a generalized algorithm suitable for data from various fields, including sales, marketing, software, wine and electricity. A confusion matrix is used to validate the quality of the results produced by our algorithm.
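The algorithm itself is described in the paper rather than on this page; as a minimal sketch of the general idea behind threshold-based incremental clustering of numeric data (the class name IncrementalClusterer, the threshold value, and the running-mean centroid update below are illustrative assumptions, not the authors' published method), consider:

import math

def euclidean(a, b):
    """Euclidean distance between two equal-length numeric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class IncrementalClusterer:
    """Hypothetical threshold-based incremental clustering sketch.

    Each arriving point joins the nearest existing cluster if it lies
    within `threshold` of that cluster's centroid; otherwise it seeds
    a new cluster. Centroids are kept as running means, so knowledge
    is augmented per point instead of re-clustering the whole data set.
    """

    def __init__(self, threshold):
        self.threshold = threshold  # assumed domain-specific cutoff
        self.centroids = []         # one running-mean centroid per cluster
        self.counts = []            # number of points absorbed per cluster

    def add(self, point):
        """Assign one new point incrementally; return its cluster id."""
        if self.centroids:
            nearest = min(range(len(self.centroids)),
                          key=lambda i: euclidean(point, self.centroids[i]))
            if euclidean(point, self.centroids[nearest]) <= self.threshold:
                n = self.counts[nearest]
                # Incremental (running-mean) centroid update.
                self.centroids[nearest] = [
                    (c * n + p) / (n + 1)
                    for c, p in zip(self.centroids[nearest], point)
                ]
                self.counts[nearest] = n + 1
                return nearest
        # No cluster is close enough: start a new one.
        self.centroids.append(list(point))
        self.counts.append(1)
        return len(self.centroids) - 1

# Points arrive one at a time, as in a growing data stream.
clusterer = IncrementalClusterer(threshold=2.0)
for p in [(1.0, 1.0), (1.5, 1.2), (8.0, 8.0), (8.3, 7.9)]:
    print(p, "->", clusterer.add(p))

A confusion-matrix check, as the abstract mentions, would then cross-tabulate such cluster assignments against known class labels (for example, wine quality grades as in reference 24) to quantify how well the discovered clusters match the ground truth.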

References
  1. Ahmed E. Hassan and Tao Xie, “Mining Software Engineering Data”, North Carolina State University, USA, ICSE 2010 Tutorial T18.
  2. Ahmed E. Hassan, Tao Xie, “Software Intelligence: The Future of Mining Software Engineering Data”, FoSER 2010, November 7–8, 2010, Santa Fe, New Mexico, USA. 2010 ACM 978-1-4503-0427-6/10/11.
  3. Alex Berson, Stephen Smith, and Kurt Thearling, “An Overview of Data Mining Techniques”, excerpted from the book Building Data Mining Applications for CRM.
  4. Blake, C.L. and Merz, C.J. UCI Repository of Machine Learning Databases [http://www.ics.uci.edu/~mlearn/MLRepository.html]. Irvine, CA: University of California, Department of Information and Computer Science, 1998
  5. C. Aggarwal, J. Han, J. Wang, and P. S. Yu, A Framework for Projected Clustering of High Dimensional Data Streams, Proc. 2004 Int. Conf. on Very Large Data Bases (VLDB'04), Toronto, Canada, Aug. 2004.
  6. C. Aggarwal, J. Han, J. Wang, P. S. Yu, A Framework for Clustering Evolving Data Streams, Proc. 2003 Int. Conf. on Very Large Data Bases (VLDB'03), Berlin, Germany, Sept. 2003.
  7. Di Mauro, N., Esposito, F., Ferilli, S., Basile, T.A.: A backtracking strategy for order-independent incremental learning. In de Mantaras, R.L., ed.: Proceedings of ECAI04, IOS Press (2004)
  8. Dr. Parag A. Kulkarni and Preeti Mulay, “Incremental learning using semi-supervised and incremental clustering”, ICAI’09, Las Vegas, USA.
  9. Dr. Parag A. Kulkarni, “Special Session on Learning Methodologies for Classification and Decision Making”, at the 4th Indian International Conference on Artificial Intelligence (IICAI-09), December 16-18, 2009, Tumkur (near Bangalore), India.
  10. Dr. Parag A. Kulkarni, “Pattern based classification and decision making with a case study of AI aspects of knowledge management”, 3rd Indian International Conference on AI (IICAI-07), Dec 17-19, 2007, Pune.
  11. Euclidean distance, from http://en.wikipedia.org/wiki/Euclidean_distance.
  12. Fahim A.M., Salem A.M., Torkey F.A., Ramadan M.A., "An efficient enhanced k-means clustering algorithm", Journal of Zhejiang University science, vol. 7, no.10, pp.1626-1633, 2006.
  13. Fisher, D.H. (1987), Knowledge Acquisition via Incremental Conceptual Clustering, Machine Learning 2:139-172, reprinted in Shavlik & Dietterich (eds.), Readings in Machine Learning, section 3.2.1.
  14. Ian Davidson, Martin Ester, S.S.Ravi,“Efficient incremental constrained clustering”, KDD 2007.
  15. Joel Scanlan, Jacky Hartnett, and Ray Williams, “DynamicWEB: Profile Correlation Using COBWEB”, School of Computing, University of Tasmania, Hobart, Australia. In A. Sattar and B.H. Kang (Eds.): AI 2006, LNAI 4304, pp. 1059-1063, Springer-Verlag Berlin Heidelberg, 2006.
  16. K. A. Abdul Nazeer, M. P. Sebastian, “Improving the Accuracy and Efficiency of the k-means Clustering Algorithm”, Proceedings of the World Congress on Engineering 2009, Vol. I, WCE 2009, July 1-3, 2009, London, U.K.
  17. L. Jegatha Deborah, R. Baskaran, A. Kannan, “A Survey on Internal Validity Measure for Cluster Validation”, 2009.
  18. Langley, Pat. 1995. Elements of Machine Learning. San Francisco: Morgan Kaufmann
  19. Margaret H. Dunham, “Data Mining: Introductory and Advanced Topics”, Prentice Hall 2003, paper 315pp, ISBN-10: 0130888923, ISBN-13: 9780130888921
  20. Mitchell, Tom. 1997. Machine Learning. McGraw-Hill.
  21. Nils J. Nilsson, "Introduction to Machine Learning - Draft of Incomplete Notes", 1996.
  22. Nitin Namdeo Pise, Parag Kulkarni, “A Survey of Semi-Supervised Learning Methods”, International Conference on Computational Intelligence and Security, 2008.
  23. Nitin Namdeo Pise, Parag Kulkarni, “Semi-Supervised Learning with SVM and K-Means Clustering Algorithm”, IICAI 2009: 463-482, 2009.
  24. P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis, “Modeling wine preferences by data mining from physicochemical properties”, Decision Support Systems, Elsevier, 47(4):547-553, ISSN: 0167-9236.
  25. Ron Kohavi and Foster Provost, "Special Issue on Applications of Machine Learning and the Knowledge Discovery Process", Machine Learning, 30: 271-274 (1998).
  26. Rui Xu, Donald Wunsch II,“Survey of Clustering Algorithms”, IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 16, NO. 3, MAY 2005
  27. Sunita Jahirabadkar, Parag Kulkarni: Learning High Dimensional Distributed Data Using Hierarchical Subspace Clustering. IICAI 2009: 423-432,2009.
  28. Taghi M., Edward B., Wendell D., John P., “Data mining of software development databases”, Software Quality Journal 9, 161-176, 2001.
  29. Tao Xie, North Carolina State University, USA and Jian Pei, Simon Fraser University, Canada, “Data mining for software engineering”, KDD 2006 tutorial.
  30. Tom M. Mitchell. Does Machine Learning Really Work? AI Magazine 18(3): Fall 1997, 11-20.
  31. Using Data Mining for Wine Quality Assessment, Lecture Notes in Computer Science, 2009, Volume 5808/2009, 66-79, DOI: 10.1007/978-3-642-04747-3_8.
  32. W. Wu and H. Xiong (Eds.), “On Quantitative Evaluation of Clustering Systems”, Information Retrieval and Clustering, 2002, School of Computing, National University of Singapore, 3 Science Drive 2, Singapore 117543.
  33. Weka 3 - Data Mining with Open Source Machine Learning Software, www.cs.waikato.ac.nz/ml/weka/
  34. XLMiner Data Mining Add-in For Excel, www.solver.com/xlminer/
  35. Yiu-Ming Cheung, “k*-Means: A new generalized k-means clustering algorithm”, Pattern Recognition Letters 24 (2003) 2883–2893.
Index Terms

Computer Science
Information Sciences

Keywords

incremental clustering, incremental learning, knowledge augmentation