Research Article

Extraction of Best Attribute Subset using Kruskal's Algorithm

by Sonam R. Yadav, Ravi P. Patki
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 122 - Number 16
Year of Publication: 2015
Authors: Sonam R. Yadav, Ravi P. Patki
DOI: 10.5120/21781-5058

Sonam R. Yadav, Ravi P. Patki. Extraction of Best Attribute Subset using Kruskal's Algorithm. International Journal of Computer Applications 122, 16 (July 2015), 1-5. DOI=10.5120/21781-5058

@article{ 10.5120/21781-5058,
author = { Sonam R. Yadav and Ravi P. Patki },
title = { Extraction of Best Attribute Subset using Kruskal's Algorithm },
journal = { International Journal of Computer Applications },
issue_date = { July 2015 },
volume = { 122 },
number = { 16 },
month = { July },
year = { 2015 },
issn = { 0975-8887 },
pages = { 1-5 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume122/number16/21781-5058/ },
doi = { 10.5120/21781-5058 },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}
%0 Journal Article
%A Sonam R. Yadav
%A Ravi P. Patki
%T Extraction of Best Attribute Subset using Kruskal's Algorithm
%J International Journal of Computer Applications
%@ 0975-8887
%V 122
%N 16
%P 1-5
%D 2015
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Data mining is the process of extracting useful and meaningful information from huge amounts of raw data. Doing so is not a simple task: it involves several phases, such as pre-processing, classification, and analysis, and a number of techniques exist for extracting useful data. Attribute selection is one of the most effective of these, and it is an important topic in data mining because it reduces dimensionality, removes irrelevant and redundant data, and increases the accuracy of results. It is the process of identifying a subset of the most useful attributes that produces results comparable to those of the original, complete attribute set. Over the last few years, many attribute selection techniques have been proposed, and they are judged on two measures: efficiency, the time required to find the attribute subset, and effectiveness, the quality of that subset. Examples include the wrapper approach, the filter approach, the Relief algorithm, and distributional clustering, but each has drawbacks, such as an inability to handle large volumes of data, high computational complexity, no guarantee of accuracy, difficulty of evaluation, or weak redundancy detection. To overcome some of these problems, this paper proposes an effective clustering-based attribute selection method for high-dimensional data. The technique first removes irrelevant attributes according to a threshold value; a minimum spanning tree is then constructed over the remaining attributes using Kruskal's algorithm, the tree is partitioned, and a representative attribute is selected from each partition. These representative attributes form the final attribute set.
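The pipeline outlined in the abstract (threshold-based removal of irrelevant attributes, a Kruskal minimum spanning tree over the survivors, partitioning of the tree, and selection of one representative per partition) can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes symmetric uncertainty (SU) as the relevance/redundancy measure, weights graph edges by 1 - SU, and partitions the spanning tree by deleting edges whose weight exceeds a cut threshold. All function and parameter names (select_attributes, relevance_threshold, cut_threshold) are illustrative, since the abstract does not specify them.

# Minimal sketch of a clustering-based attribute selection pipeline:
# filter irrelevant attributes by a relevance threshold, build an MST with
# Kruskal's algorithm, partition the tree, keep one representative per part.
# Assumptions: symmetric uncertainty as the measure, 1 - SU as edge weight.

import math
from collections import defaultdict
from itertools import combinations


def entropy(values):
    """Shannon entropy of a discrete sequence."""
    counts = defaultdict(int)
    for v in values:
        counts[v] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), a value in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    joint = entropy(list(zip(x, y)))
    info_gain = hx + hy - joint          # mutual information I(X; Y)
    return 2 * info_gain / (hx + hy) if hx + hy > 0 else 0.0


def kruskal_mst(num_nodes, edges):
    """Kruskal's algorithm with union-find; edges are (weight, u, v) tuples."""
    parent = list(range(num_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    mst = []
    for w, u, v in sorted(edges):           # consider edges in weight order
        ru, rv = find(u), find(v)
        if ru != rv:                         # edge joins two components: keep it
            parent[ru] = rv
            mst.append((w, u, v))
    return mst


def select_attributes(data, target, relevance_threshold=0.1, cut_threshold=0.7):
    """data: dict {attribute_name: list of discrete values}; target: class labels."""
    # Step 1: drop attributes whose relevance to the target is below the threshold.
    relevant = {a: v for a, v in data.items()
                if symmetric_uncertainty(v, target) >= relevance_threshold}
    names = list(relevant)

    # Step 2: complete graph on surviving attributes; weight = 1 - SU,
    # so strongly correlated attributes are "close" to each other.
    edges = [(1 - symmetric_uncertainty(relevant[a], relevant[b]), i, j)
             for (i, a), (j, b) in combinations(enumerate(names), 2)]

    # Step 3: build the MST with Kruskal's algorithm, then partition it
    # by removing edges longer than the cut threshold.
    mst = kruskal_mst(len(names), edges)
    kept = [(u, v) for w, u, v in mst if w <= cut_threshold]

    # Step 4: each connected component of the partitioned tree contributes
    # the attribute most relevant to the target as its representative.
    adj = defaultdict(set)
    for u, v in kept:
        adj[u].add(v)
        adj[v].add(u)
    seen, selected = set(), []
    for start in range(len(names)):
        if start in seen:
            continue
        stack, component = [start], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.append(node)
            stack.extend(adj[node] - seen)
        best = max(component,
                   key=lambda i: symmetric_uncertainty(relevant[names[i]], target))
        selected.append(names[best])
    return selected

Under these assumptions, calling select_attributes(columns, class_labels) on discretised data returns the names of the representative attributes, i.e. the final attribute subset.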

Index Terms

Computer Science
Information Sciences

Keywords

Attribute Selection, Clustering, Data Mining, Graph-Based Clustering, Minimum Spanning Tree