Research Article

Validation of Software Quality Models using Machine Learning: An Empirical Study

Published in November 2013 by Surbhi Gaur, Savleen Kaur, Inderpreet Kaur
8th National Conference on Next generation Computing Technologies and Applications
Foundation of Computer Science USA
NGCTA - Number 1

Surbhi Gaur, Savleen Kaur, Inderpreet Kaur. Validation of Software Quality Models using Machine Learning: An Empirical Study. 8th National Conference on Next generation Computing Technologies and Applications. NGCTA, 1 (November 2013), 1-7.

@article{gaur2013validation,
author = { Surbhi Gaur, Savleen Kaur, Inderpreet Kaur },
title = { Validation of Software Quality Models using Machine Learning: An Empirical Study },
journal = { 8th National Conference on Next generation Computing Technologies and Applications },
issue_date = { November 2013 },
volume = { NGCTA },
number = { 1 },
month = { November },
year = { 2013 },
issn = { 0975-8887 },
pages = { 1-7 },
numpages = { 7 },
url = { /proceedings/ngcta/number1/14189-1302/ },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}
%0 Proceeding Article
%1 8th National Conference on Next generation Computing Technologies and Applications
%A Surbhi Gaur
%A Savleen Kaur
%A Inderpreet Kaur
%T Validation of Software Quality Models using Machine Learning: An Empirical Study
%J 8th National Conference on Next generation Computing Technologies and Applications
%@ 0975-8887
%V NGCTA
%N 1
%P 1-7
%D 2013
%I International Journal of Computer Applications
Abstract

Software quality is a significant non-functional requirement that many software products fail to meet. Prediction models built on object-oriented metrics can be used to identify fault-prone classes. This paper empirically analyses the relationship between object-oriented metrics and fault proneness on NASA data sets using six machine-learning classifiers. Multivariate analysis of the NASA data sets shows that Random Forest provides the best values for accuracy, precision, sensitivity and specificity.
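
The experiment the abstract describes can be illustrated with a short sketch: fit a Random Forest on class-level object-oriented metrics and report the four quoted measures. This is a minimal sketch, not the authors' actual pipeline; the file name kc1_class_level.csv and the label column defective are illustrative assumptions about a NASA MDP-style data set, and scikit-learn stands in for whichever toolkit the study used.

    # Minimal sketch (illustrative assumptions, not the authors' setup):
    # train a Random Forest on object-oriented metrics and report
    # accuracy, precision, sensitivity and specificity.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, confusion_matrix, precision_score
    from sklearn.model_selection import train_test_split

    # Hypothetical NASA MDP-style data set: metric columns plus a binary label.
    data = pd.read_csv("kc1_class_level.csv")        # assumed file name
    X = data.drop(columns=["defective"])             # assumed label column
    y = data["defective"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y)

    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)

    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print("accuracy   :", accuracy_score(y_test, pred))
    print("precision  :", precision_score(y_test, pred))
    print("sensitivity:", tp / (tp + fn))            # true positive rate
    print("specificity:", tn / (tn + fp))            # true negative rate

Swapping RandomForestClassifier for another of the classifiers the paper compares (for example, naive Bayes or bagging from the same library) would reproduce the kind of multivariate comparison the abstract reports.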

References
  1. Aggarwal, K. K., Singh, Y., Kaur, A. and Malhotra, R. 2006. Empirical Study of Object-Oriented Metrics. Journal of Object Technology, 5, 8.
  2. Aha, D. W. and Daniel, J. J. 1991. Instance-Based Learning Algorithms. Machine Learning, 6, 37-66. Kluwer Academic Publishers, Boston.
  3. Alshayeb, M. and Li, W. 2003. An Empirical Validation of Object-Oriented Metrics in Two Different Iterative Software Processes. IEEE Transactions on Software Engineering, 12, 11, 1043-1049.
  4. Basili, V., Briand, L. and Melo, W. L. 1996. A Validation of Object-Oriented Design Metrics as Quality Indicators. IEEE Transactions on Software Engineering, 22, 10, 267-271.
  5. Bellini, P., Bruno, I., Nesi, P. and Rogai, D. 2005. Comparing Fault-Proneness Estimation Models. In Proc. of 10th IEEE International Conference on Engineering of Complex Computer Systems, 205-214.
  6. Breiman, L. 2001. Random Forests. Machine Learning, 45, 1, 5-32.
  7. Breiman, L. 1996. Bagging Predictors. Machine Learning, 26, 123-140.
  8. Briand, L. and Wust, J. 2001. Replicated Case Studies for Investigating Quality Factors in Object-Oriented Designs. Empirical Software Engineering: An International Journal, 6, 1, 11-58.
  9. Briand, L., Daly, J. and Wust, J. 1999. A Unified Framework for Coupling Measurement in Object-Oriented Systems. IEEE Transactions on Software Engineering, 25, 91-121.
  10. Briand, L., Daly, J., Porter, V. and Wust, J. 2000. Exploring the Relationships between Design Measures and Software Quality. Journal of Systems and Software, 5, 245-273.
  11. Briand, L., Melo, W. L. and Wust, J. 2002. Assessing the Applicability of Fault-Proneness Models across Object Oriented Software Projects. IEEE Transactions on Software Engineering, 28, 7, 706-720.
  12. Chidamber, S. and Kemerer, C. F. 1994. A Metrics Suite for Object-Oriented Design. IEEE Transactions on Software Engineering, SE-20, 6, 476-493.
  13. Chidamber, S. R. and Kemerer, C. F. 1991. Towards a Metrics Suite for Object Oriented Design. In Proceedings of 6th ACM Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA), Phoenix, Arizona, 197-211.
  14. Chidamber, S., Darcy, D. and Kemerer, C. 1998. Managerial Use of Metrics for Object-Oriented Software: An Exploratory Analysis. IEEE Transactions on Software Engineering, 24, 8, 629-639.
  15. Emam, K. El, Benlarbi, S., Goel, N. and Rai, S. 2001. The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics. IEEE Transactions on Software Engineering, 27, 7, 630-650.
  16. Emam, K. El, Melo, W. and Machado, J. 2001. The Prediction of Faulty Classes Using Object-Oriented Design Metrics. Journal of Systems and Software, 56, 63-75.
  17. Gyimothy, T., Ferenc, R. and Siket, I. 2005. Empirical Validation of Object-Oriented Metrics on Open Source Software for Fault Prediction. IEEE Transactions on Software Engineering, 31, 10, 897-910.
  18. Henderson-Sellers, B. 1996. Object-Oriented Metrics: Measures of Complexity. Prentice Hall. ISBN 0-13-239872-9.
  19. Hitz, M. and Montazeri, B. 1995. Measuring Coupling and Cohesion in Object-Oriented Systems. In Proc. Int. Symposium on Applied Corporate Computing, Monterrey, Mexico.
  20. Hosmer, D. W. and Lemeshow, S. Applied Logistic Regression. ISBN 9780471356325.
  21. Huang, K. 2003. Discriminative Naive Bayesian Classifiers. Department of Computer Science and Engineering, The Chinese University of Hong Kong.
  22. Khoshgoftaar, T. M. and Seliya, N. 2004. Comparative Assessment of Software Quality Classification Techniques: An Empirical Study. Empirical Software Engineering, 9, 229-257.
  23. Koru, A. and Liu, H. 2005. Building Effective Defect Prediction Models in Practice. IEEE Software, 23-29.
  24. Li, W. and Henry, S. 1993. Object Oriented Metrics that Predict Maintainability. Journal of Systems and Software, 23, 2, 111-122.
  25. Lorenz, M. and Kidd, J. 1994. Object-Oriented Software Metrics. Prentice-Hall.
  26. McCabe & Associates. 1994. McCabe Object Oriented Tool User's Instructions.
  27. Menzies, T., Greenwald, J. and Frank, A. 2007. Data Mining Static Code Attributes to Learn Defect Predictors. IEEE Transactions on Software Engineering, 33, 1, 2-13.
  28. Mitchell, T. 1997. Machine Learning. McGraw Hill. ISBN 0070428077. Available at http://www.cs.cmu.edu/~tom/mlbook-chapter-slides.html
  29. Olague, H., Etzkorn, L., Gholston, S. and Quattlebaum, S. 2007. Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes. IEEE Transactions on Software Engineering, 33, 8, 402-419.
  30. Rosenberg, L. and Hyatt, L. 1995. Software Quality Metrics for Object Oriented System Environments. NASA Technical Report.
  31. Schroeder, M. 1999. A Practical Guide to Object-Oriented Metrics. IT Professional, 1, 6, 30-36.
  32. Stone, M. 1974. Cross-Validatory Choice and Assessment of Statistical Predictions. J. Royal Stat. Soc., 36, 111-147.
  33. Subramanyam, R. and Krishnan, M. S. 2003. Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects. IEEE Transactions on Software Engineering, 29, 4, 297-310.
  34. Witten, I. H. and Frank, E. 2005. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann.
  35. http://nasa-softwaredefectdatasets.wikispaces.com/
  36. D'Ambros, M., Lanza, M. and Robbes, R. 2012. Evaluating Defect Prediction Approaches: A Benchmark and an Extensive Comparison. Empirical Software Engineering, 17, 531-577. DOI 10.1007/s10664-011-9173-9.
Index Terms

Computer Science
Information Sciences

Keywords

Object-oriented software metrics, quality metrics, classifiers, ROC, fault proneness.